Question and Answer

Connor McDonald

Thanks for the question, parag j patankar.

Asked: October 26, 2005 - 3:22 am UTC

Last updated: October 28, 2024 - 6:57 am UTC

Version: 9.2

Viewed 1000+ times

You Asked

Hi Tom,

Suppose I have the following SQL query

UPDATE TB38O
SET G000KE =
(SELECT TB39G.A6000D FROM TB39G
WHERE
TB39G.A0020C = TB38O.A020KF AND
TB39G.A0090C = TB38O.A090KF AND
TB39G.A1010Y = TB38O.B010KF AND
TB39G.A6000Y = TB38O.G000KF AND
TB39G.A0230Z = TB38O.A230KF AND
TB39G.A9490C = TB38O.J490KE AND
rownum < 2
)
WHERE G000KE='999'
AND EXISTS (SELECT '1' FROM TB39G
WHERE
TB39G.A0020C = TB38O.A020KF AND
TB39G.A0090C = TB38O.A090KF AND
TB39G.A1010Y = TB38O.B010KF AND
TB39G.A6000Y = TB38O.G000KF AND
TB39G.A0230Z = TB38O.A230KF AND
TB39G.A9490C = TB38O.J490KE) ;

In this query TB39G and TB38O cannot have primary keys.

For this type of query, is there any way to avoid 3 FTS (1 FTS of TB38O and 2 FTS of TB39G) of the tables in the SET ... and WHERE clauses, and have only 1 FTS of each table?

regards & thanks
pjp

and Tom said...

that is scary scary scary stuff.

"rownum < 2" - so basically, you are updating G000KE to a random record from TB39G
where that key matches -- if there are 2 or more records, you have NO CONTROL over which is used.

That is called "non-deterministic": using the same data in two different databases, you'll get two different end results.

Very very scary to me.

Without keys, optimization will be hard since you cannot update the join. You'll likely have a lot more than 3 full table scans (you might have a full scan of TB39G for each row in TB38O to be updated).

You can avoid part of the work by

UPDATE TB38O
SET G000KE =
nvl( (SELECT TB39G.A6000D FROM TB39G
WHERE
TB39G.A0020C = TB38O.A020KF AND
TB39G.A0090C = TB38O.A090KF AND
TB39G.A1010Y = TB38O.B010KF AND
TB39G.A6000Y = TB38O.G000KF AND
TB39G.A0230Z = TB38O.A230KF AND
TB39G.A9490C = TB38O.J490KE AND
rownum < 2
), '999' )
WHERE G000KE='999'
/

but that rownum < 2 - that is very very scary.

If the columns are in fact UNIQUE - and if you put on the proper unique/primary key constraint (obviating the need to use rownum at all) - you can update the join, which would be the most efficient method in all likelihood.
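For illustration, an update of that (key-preserved) join might look like this - a sketch only, not tested against your data, using the column names from the question and assuming the six TB39G join columns really are unique:

ALTER TABLE TB39G ADD CONSTRAINT TB39G_UK
  UNIQUE (A0020C, A0090C, A1010Y, A6000Y, A0230Z, A9490C);

UPDATE ( SELECT o.G000KE, g.A6000D
           FROM TB38O o, TB39G g
          WHERE g.A0020C = o.A020KF
            AND g.A0090C = o.A090KF
            AND g.A1010Y = o.B010KF
            AND g.A6000Y = o.G000KF
            AND g.A0230Z = o.A230KF
            AND g.A9490C = o.J490KE
            AND o.G000KE = '999' )
   SET G000KE = A6000D;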

Rating

  (27 ratings)


Comments

SQL Optimization

Parag J Patankar, October 27, 2005 - 2:40 am UTC

Hi Tom,

Thanks for your excellent answer as usual. Will you please explain your quote "That is called "non-deterministic": using the same data in two different databases, you'll get two different end results." in more detail, if possible with an example?

I assume that "different databases" means different Oracle databases.

regards & thanks
pjp


Tom Kyte
October 27, 2005 - 6:50 am UTC

ops$tkyte@ORA10GR2> create table t1 ( x int, y int );
ops$tkyte@ORA10GR2> create table t2 ( x int, y int );
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> insert into t1 values ( 1, 1 );
ops$tkyte@ORA10GR2> insert into t1 values ( 1, 2 );
ops$tkyte@ORA10GR2> insert into t2 values ( 1, 2 );
ops$tkyte@ORA10GR2> insert into t2 values ( 1, 1 );
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> select * from t1 minus select * from t2;
ops$tkyte@ORA10GR2> select * from t2 minus select * from t1;
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> select * from t1 where x = 1 and rownum < 2;

         X          Y
---------- ----------
         1          1
ops$tkyte@ORA10GR2> select * from t2 where x = 1 and rownum < 2;

         X          Y
---------- ----------
         1          2



t1 and t2 - same data.

where x = 1 and rownum < 2 - different answers.

It is not deterministic, same data, different order of rows, different answers.


sends
chills
down
my 
spine....
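For contrast, a small illustration using the T1/T2 tables above: an aggregate is deterministic no matter how the rows happen to be stored.

select max(y) from t1 where x = 1;   -- 2
select max(y) from t2 where x = 1;   -- 2, same answer from both tables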
 

Help required

Anil Pant, October 28, 2005 - 3:37 am UTC

Hello,
We have an application which maintains customer details. One of the screens has a search option, and the user can search on any or all of 10-15 fields (customer id, customer name, address line 1, address line 2, city, zip etc).
If the user tries to search on customer name or address line 1 or address line 2 or zip code, it takes a lot of time.
We are using the cost-based optimizer and the statistics are up to date.

I have to give them a solution for how best the query can be optimized.

Customer Id is the key column. There are indexes on a few of the columns.

One proposed solution given to me is if the user tries to search on any one of the non-key columns like address line 1 or address line 2 or city, try to hit the key columns or try to hit the key tables.

I did not understand how to write such a query.

I know it is a very vague question but your inputs will be appreciated.

We are working on Oracle 9i.

Thanks
Anil Pant


Tom Kyte
October 28, 2005 - 12:47 pm UTC

it is way too vague.

I've no idea what you mean by "One proposed solution given to me is if the user tries to search on any one of
the non-key columns like address line 1 or address line 2 or city, try to hit
the key columns or try to hit the key tables."

that sort of implies "multiple tables" - hence, answering this is sort of "hard" since I've no clue what the schema might be.

SQL Query optimization

Parag J Patankar, November 02, 2005 - 4:44 am UTC

Hi Tom,

I am having following query

select tomdate, spotdate
from (
select dt tomdate, dt+1 spotdate
from ( select sysdate+rownum dt
from all_objects
where rownum <= 25
)
where to_char(dt, 'DY') not in ('SAT', 'SUN')
and to_char(dt+1, 'DY') not in ('SAT', 'SUN')
and to_char(dt, 'RRRRMMDD') not in (
select trunc(dt)
from holiday
)
and to_char(dt+1, 'RRRRMMDD') not in (
select trunc(dt)
from holiday
)
)
where rownum = 1
/

This calculates 2 working days. I just want to avoid two scans of the "holiday" table. Will you please tell me the best way to do this?

regards & thanks
pjp

Tom Kyte
November 03, 2005 - 4:48 am UTC

why select trunc(dt)??? why wouldn't holiday just have the (already truncated) days in there? The trunc() would preclude index access to holiday - never never apply functions to database columns unless and until you have to


and why RRRRMMDD with a to_char??????  why are you doing a to_char() to turn a date into a string, to compare to a DATE????????  I'm not getting this at all - why all of the conversions?

The optimizations should start with getting rid of the to_chars with rrrrmmdd and the truncs.

suggestions:

holiday is an index organized table.
holiday already HAS the trunc of the date.
check to see if the date is a FRI, SAT or SUN (then you don't need to deal with dt+1).
use NOT EXISTS to see if dt or dt+1 exists, instead of NOT IN in this case.


ops$tkyte@ORA10GR2> drop table holiday;
ops$tkyte@ORA10GR2> create table holiday ( dt date primary key ) organization index;
ops$tkyte@ORA10GR2> insert into holiday
ops$tkyte@ORA10GR2> select trunc(sysdate)+rownum+50 from all_objects where rownum <= 500;
ops$tkyte@ORA10GR2> commit;
ops$tkyte@ORA10GR2> alter session set nls_date_format = 'rrrrmmdd';
ops$tkyte@ORA10GR2> set autotrace on statistics
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> with dates
  2  as
  3  (select trunc(sysdate)+level dt
  4     from dual
  5    where to_char(trunc(sysdate)+level,'DY') not in ( 'FRI', 'SAT', 'SUN' )
  6  connect by level <= 25)
  7  select dt, dt+1
  8    from dates
  9   where not exists ( select null
 10                        from holiday h
 11                       where h.dt = dates.dt
 12                          or h.dt = (dates.dt+1))
 13     and rownum = 1
 14  /

DT       DT+1
-------- --------
20051107 20051108


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          2  consistent gets
          0  physical reads
          0  redo size
        472  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> select tomdate, spotdate
  2  from  (
  3         select dt tomdate, dt+1 spotdate
  4         from ( select sysdate+rownum dt
  5                from all_objects
  6                where rownum <= 25
  7        )
  8        where to_char(dt, 'DY') not in ('SAT', 'SUN')
  9        and   to_char(dt+1, 'DY') not in ('SAT', 'SUN')
 10        and   to_char(dt,  'RRRRMMDD') not in (
 11              select trunc(dt)
 12              from holiday
 13        )
 14        and   to_char(dt+1,  'RRRRMMDD') not in (
 15              select trunc(dt)
 16              from holiday
 17        )
 18            )
 19  where rownum = 1
 20  /

TOMDATE  SPOTDATE
-------- --------
20051107 20051108


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         27  consistent gets
          0  physical reads
          0  redo size
        481  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
 

SQL query

Parag J Patankar, November 07, 2005 - 7:39 am UTC

Hi Tom,

Brilliant explanation to my question. One more question regarding SQL

I have following query 

17:51:38 SQL> l
  1    select count(*)
  2    from
  3    ( select to_date(v09040, 'RRRRMM') v09040
  4      from tw937
  5      where trim(v09040) <> '013' and
  6            trim(v09040) <> '024' ) tw937
  7    where
  8*   (sysdate - v09040 -1) > 800

If I run this query I am getting following error message

17:51:37 SQL> @toto
  ( select to_date(v09040, 'RRRRMM') v09040
                   *
ERROR at line 3:
ORA-01843: not a valid month

But If I run : ( inner query of above SQL )

  1  select distinct to_date(v09040, 'RRRRMM')
  2  from tw937
  3  where trim(v09040) <> '013' and
  4* trim(v09040) <> '024'
17:57:32 SQL> /

TO_DATE(V
---------
01-JUN-03
01-JUL-03
01-AUG-03
01-SEP-03
01-OCT-03
01-NOV-03
01-DEC-03
01-JAN-04
01-FEB-04
01-MAR-04
01-APR-04
01-MAY-04
01-JUN-04
01-JUL-04
01-AUG-04
01-SEP-04
01-OCT-04
01-NOV-04
01-DEC-04
01-JAN-05
01-FEB-05
01-MAR-05
01-APR-05
01-MAY-05
01-JUN-05
01-JUL-05
01-AUG-05
01-SEP-05

28 rows selected.

my query executes successfully. Will you please tell me what the problem is? If I run only the inner query separately, why is it successful?

regards & thanks
pjp

Note: I have used DISTINCT as this table has a few thousand records
 

Tom Kyte
November 07, 2005 - 10:46 am UTC

1 select count(*)
2 from
3 ( select to_date(v09040, 'RRRRMM') v09040
4 from tw937
5 where trim(v09040) <> '013' and
6 trim(v09040) <> '024' ) tw937
7 where
8* (sysdate - v09040 -1) > 800

with predicate pushing and view merging is not any different to us than:


select count(*)
from tw937
where trim(v09040) <> '013'
and trim(v09040) <> '024'
and (sysdate - to_date(v09040,'rrrrmm') -1) > 800


Welcome to the world of "this is what happens when you don't use appropriate datatypes to hold DATES, NUMBERS and STRINGS - you get tons of errors"

(do you really need that trim, if so WHY??? ugh).... Anyway:

select count(*)
from tw937
where trim(v09040) <> '013'
and trim(v09040) <> '024'
and (sysdate - case
when trim(v09040) <> '013'
and trim(v09040) <> '024'
then to_date(v09040,'rrrrmm')
end -1) > 800
actually:

select count(*)
from tw937
where (sysdate - case
when trim(v09040) <> '013'
and trim(v09040) <> '024'
then to_date(v09040,'rrrrmm')
end -1) > 800

is sufficient (since the sysdate - ... expression will be NULL when the CASE does not convert the value), and it is "safe" only IF you can be 100% sure that when that column is not 013 or 024 it contains a valid date (which you cannot be, of course - some day someone will put something "bad" in there, Murphy says so....)
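An aside for readers on 12.2 or later: the conversion itself can be made safe with the ON CONVERSION ERROR clause, so a bad value becomes NULL instead of raising ORA-01843 - a sketch only:

select count(*)
  from tw937
 where (sysdate - to_date(v09040 default null on conversion error, 'rrrrmm') - 1) > 800;

-- or inspect the offending values first
select v09040, validate_conversion(v09040 as date, 'rrrrmm') is_valid_date
  from tw937;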



A possible solution for Anil Patel's query

Bipul, November 07, 2005 - 10:59 am UTC

Hi.

You can use Oracle Text for efficient searching. Use a PL/SQL procedure as a user_datastore to build XML from the various columns of your table and then create a Text index on that XML. You will then need to dynamically build the query using CONTAINS and the "within" clause to do the text search.

For example, add a VARCHAR2(1) column called text_xml to the table. Create a pl/sql user_datastore procedure which will generate an XML from different columns of the table

your XML will look like

<City>London</City>
<AddressLine1>Middlesex House</AddressLine1>
...

Create text index on this text_xml column using the user_datastore.

Generate the query which will look like
Select <blah> from <table_name >
WHERE CONTAINS (text_xml, 'london within CITY') > 0;

HTH
-bipul
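A rough sketch of the moving parts described above - all names are hypothetical, and the preference attributes and grants should be checked against the Oracle Text documentation:

create or replace procedure cust_xml_ds( p_rowid in rowid, p_doc in out nocopy clob )
as
begin
  -- build the XML document for one row; add the other searchable columns the same way
  for r in ( select city, address_line1 from customers where rowid = p_rowid )
  loop
    p_doc := '<City>' || r.city || '</City>' ||
             '<AddressLine1>' || r.address_line1 || '</AddressLine1>';
  end loop;
end;
/

begin
  ctx_ddl.create_preference('cust_ds', 'user_datastore');
  ctx_ddl.set_attribute('cust_ds', 'procedure', 'cust_xml_ds');
  ctx_ddl.create_section_group('cust_sg', 'auto_section_group');
end;
/

create index cust_text_idx on customers(text_xml)
  indextype is ctxsys.context
  parameters ('datastore cust_ds section group cust_sg');

-- tag/section name matching depends on the section group settings
select customer_id
  from customers
 where contains(text_xml, 'london within City') > 0;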


SQL Query

Parag J Patankar, November 07, 2005 - 11:58 pm UTC

Hi Tom,

Thanks for your answer. Tom, you said
----------------------------------------------------------
  1    select count(*)
  2    from
  3    ( select to_date(v09040, 'RRRRMM') v09040
  4      from tw937
  5      where trim(v09040) <> '013' and
  6            trim(v09040) <> '024' ) tw937
  7    where
  8*   (sysdate - v09040 -1) > 800

with predicate pushing and view merging is not any different to us than:


select count(*)
  from tw937
 where trim(v09040) <> '013' 
   and trim(v09040) <> '024'
   and (sysdate - to_date(v09040,'rrrrmm') -1) > 800
------------------------------------------------------------
Will you please tell me what "predicate pushing and view merging" mean, in simple terms?

Secondly, if I run the first query
  1  select count(*)
  2  from
  3  ( select to_date(v09040, 'RRRRMM') v09040
  4    from tw937
  5    where v09040 <> '013' and
  6          v09040 <> '024' ) tw937
  7  where
  8* (sysdate - v09040 -1) > 800
10:07:26 SQL> /
( select to_date(v09040, 'RRRRMM') v09040
                 *
ERROR at line 3:
ORA-01843: not a valid month

and if I run this query (I think Oracle will convert the above query to this):

  1  select count(*)
  2    from tw937
  3   where v09040 <> '013'
  4     and v09040 <> '024'
  5*    and (sysdate - to_date(v09040,'rrrrmm') -1) > 800
10:08:31 SQL> /

  COUNT(*)
----------
     44270

1 row selected.

Will you please tell me why the second query runs but not the first one? (Sorry, I have not understood the explanation you gave above in this thread for the same question.)

best regards & thanks
pjp 

Tom Kyte
November 08, 2005 - 9:58 pm UTC

see
https://www.oracle.com/technetwork/issue-archive/2005/05-mar/o25asktom-089621.html

and look for Views and Merging/Pushing




the order of evaluation of the predicates - what gets done first - is up to the optimizer. In one case you got lucky, in the other you did not.


You should do it as I said - always check and convert using case. You really do not want to rely on the order of predicate evaluation to make your app safe - we can and will change the order of evaluation.


storing dates in strings, you do not even want to know what I think about that horrible practice.

parag jayant patankar, December 08, 2005 - 2:23 am UTC

Hi Tom,

As per your suggestion for SQL optimization, I have modified the query as below:

UPDATE TB38O
SET G000KE =
nvl( (SELECT TB39G.A6000D FROM TB39G
 WHERE
 TB39G.A0020C = TB38O.A020KF AND
 TB39G.A0090C = TB38O.A090KF AND
 TB39G.A1010Y = TB38O.B010KF AND
 TB39G.A6000Y = TB38O.G000KF AND
 TB39G.A0230Z = TB38O.A230KF AND
 TB39G.A9490C = TB38O.J490KE AND
 rownum < 2
), '999' )
WHERE G000KE='999'
/

In my opinion the execution time of this query is long: it is taking 22 minutes to update 127015 records.

127015 rows updated.

Elapsed: 00:22:01.60

  1* select count(*) from tb38o
12:24:04 SQL> /

  COUNT(*)
----------
    144811

1 row selected.

12:24:05 SQL> select count(*) from tb39g;

  COUNT(*)
----------
     53729

Neither table has any indexes. The explain plan shows the following:

12:36:53 SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------

-------------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | UPDATE STATEMENT     |             |     6 |   198 |   240  (41)|
|   1 |  UPDATE              | TB38O       |       |       |            |
|*  2 |   TABLE ACCESS FULL  | TB38O       |     6 |   198 |   240  (41)|
|*  3 |   COUNT STOPKEY      |             |       |       |            |
|*  4 |    TABLE ACCESS FULL | TB39G       |     1 |    27 |    91  (44)|
-------------------------------------------------------------------------


PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("SYS_ALIAS_6"."G000KE"=999)
   3 - filter(ROWNUM<2)
   4 - filter("TB39G"."A9490C"=:B1 AND "TB39G"."A0230Z"=:B2 AND
              TO_NUMBER("TB39G"."A0090C")=:B3 AND TO_NUMBER("TB39G"."A1010Y")=:B4 AND
              TO_NUMBER("TB39G"."A6000Y")=:B5 AND TO_NUMBER("TB39G"."A0020C")=:B6)

19 rows selected.

Kindly guide me.

regards & thanks
pjp
 

Tom Kyte
December 08, 2005 - 6:58 am UTC

well, an index would not hurt, don't you think?

you do see the full scan of a table for each row in the outer table right?

If you are unwilling to index (and hey, if you put a UNIQUE constraint..... we could update the join and boy how good would this go)....

Small mistake in your paper?

martin, December 08, 2005 - 10:52 am UTC

https://asktom.oracle.com/Misc/oramag/on-rollups-merges-and-moves.html

PLAN_TABLE_OUTPUT
--------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost  |
--------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |       |       |       |
|*  1 |  VIEW                 | V1   |       |       |       |
|   2 |   COUNT               |      |       |       |       |
|   3 |    MERGE JOIN         |      |       |       |       |
|   4 |     SORT JOIN         |      |       |       |       |
|   5 |      TABLE ACCESS FULL| T2   |       |       |       |
|*  6 |     SORT JOIN         |      |       |       |       |
|   7 |      TABLE ACCESS FULL| T1   |       |       |       |
--------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("V1"."USERNAME"='FRED')
   6 - access("T1"."USER_ID"="T2"."USER_ID")
       filter("T1"."USER_ID"="T2"."USER_ID")

The appearance of the VIEW step (Id=1) in Listing 1 shows that the view was not merged. The fact that the filter WHERE USERNAME='FRED' is way up there >>(Id=6)<< shows that the predicate was not pushed into the view.

Did you mean id=1 instead of id=6?


Tom Kyte
December 08, 2005 - 12:01 pm UTC

indeed.

parag jayant patankar, December 08, 2005 - 11:59 am UTC

Hi Tom,

You suggested creating an index on the tables, if possible a unique index. For now I have created an index on the tb39g table and the update was comparatively much faster, i.e. it finished in 3 minutes instead of 22 minutes. The explain plan after that shows:

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------

----------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT              |       |  3914 |   126K|   243  (41)|
|   1 |  UPDATE                       | TB38O |       |       |            |
|*  2 |   TABLE ACCESS FULL           | TB38O |  3914 |   126K|   243  (41)|
|*  3 |   COUNT STOPKEY               |       |       |       |            |
|   4 |    TABLE ACCESS BY INDEX ROWID| TB39G |     1 |    34 |    45   (3)|
|*  5 |     INDEX SKIP SCAN           | TOTO  |     1 |       |    44   (3)|
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("SYS_ALIAS_6"."G000KE"=999)
   3 - filter(ROWNUM<2)
   5 - access("TB39G"."A0230Z"=:B1 AND "TB39G"."A9490C"=:B2)
       filter("TB39G"."A9490C"=:B1 AND "TB39G"."A0230Z"=:B2 AND
              TO_NUMBER("TB39G"."A6000Y")=:B3 AND TO_NUMBER("TB39G"."A1010Y")=:B4 AND
              TO_NUMBER("TB39G"."A0090C")=:B5 AND TO_NUMBER("TB39G"."A0020C")=:B6)

It is showing cost 243, whereas my update without any index on the table, which was taking 22 minutes, showed cost 240 (see my question earlier today):

-------------------------------------------------------------------------
| Id  | Operation            | Name        | Rows  | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | UPDATE STATEMENT     |             |     6 |   198 |   240  (41)|
|   1 |  UPDATE              | TB38O       |       |       |            |
|*  2 |   TABLE ACCESS FULL  | TB38O       |     6 |   198 |   240  (41)|
|*  3 |   COUNT STOPKEY      |             |       |       |            |
|*  4 |    TABLE ACCESS FULL | TB39G       |     1 |    27 |    91  (44)|
-------------------------------------------------------------------------

My question is: why did the CBO decide to pick the plan with the higher cost? (The CBO picked the plan with cost 243 instead of the one with cost 240.)

regards & thanks
pjp

Tom Kyte
December 08, 2005 - 12:07 pm UTC

forget the cost, do not compare the cost of two queries. If you want to explore how the costing all works - get the book referenced on my home page by Jonathan Lewis.


you did not create the right index, I would index all of these columns:

TB39G.A0020C = TB38O.A020KF AND
TB39G.A0090C = TB38O.A090KF AND
TB39G.A1010Y = TB38O.B010KF AND
TB39G.A6000Y = TB38O.G000KF AND
TB39G.A0230Z = TB38O.A230KF AND
TB39G.A9490C = TB38O.J490KE

UNIQUELY (use a primary key!!! then we can update a join, you want to see fast.....)



Please Explain

Ron, December 08, 2005 - 4:58 pm UTC

Tom,

I apologize but what do you mean by "forget the cost, do not compare the cost of two queries"?

I just finished reading Jonathan's new book and that's one of his core messages: Compare Costs.

Thanks.

Tom Kyte
December 09, 2005 - 1:28 am UTC

That is not one of his core messages - his message is "understand how the cost is derived".


what I said was pretty unambiguous, I thought...

Comparing Query Costs

Gary S, December 08, 2005 - 8:41 pm UTC

My understanding of this issue is that you can compare the cost of two plans, as long as they are for the same query, but that you cannot compare costs from plans for two different queries.

I'm going to include a link to another article by Tom, along with an excerpt from it.

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:313416745628

<tom>
You cannot compare the cost of 2 queries with each other. They are simply
not comparable.

When we get a query, we come up with lots of plans. Each step of the plan is
assigned some relative cost. At the end, we apply a function to derive the
total cost of that query for each plan. These costs (which are ONLY visible to
the optimizer) can be compared as they are for the same exact SQL using the same
exact environment.

If we get another query -- another similar query -- we go through the same
steps, build lots of plans, assign a cost, pick the one with the lowest cost
from that set.

We cannot however take those 2 queries and compare the relative cost of each to
the other. Those costs are specific to those queries.

Many things influence the "cost" of a query. The "cost" is just some artificial
number we arrived at to select a query given a certain environment. Take a
query and get its cost. Now, add a hint that makes the query run faster and
check out its cost -- it might be higher, might be lower than the first. Why?
Because the environment with the hint had us adjust the internal numbers we
assign to various operations -- in an attempt to make your hint "look like" the
best way to go. It might have adjusted the overall cost for the plan you wanted to
be lower than the plan we did without the hint, or it may have made the plan we
did without the hint extremely expensive (artificially inflated its cost).

Don't compare them, you cannot -- they might as well be random numbers.
</tom>

Hope this helps ..... ... ... ....

Tom Kyte
December 09, 2005 - 1:35 am UTC

by adding physical structures, you have two different queries as far as I'm concerned.

Delay in invoking a DBA Job Process

jayasree, December 09, 2005 - 3:14 am UTC

Hi Tom,

We have created a new job (to run every 5 minutes) in the dba_jobs table.

The Last Date and Next Date at the time of creating the job were:

Last Date : 12/9/2005 11:44:05 AM
Next Date : 12/9/2005 11:49:05 AM

After 5 min (after the first execution)


Last Date : 12/9/2005 11:49:07 AM
Next Date : 12/9/2005 11:54:07 AM

We see that there is a delay in running the job by 2 seconds.

Could you guide us in getting it started at exactly the time it is scheduled for?

Thanks in advance.

Rgds,

Jayasree.



SQL query and DBMS_JOB? - Kind of "off topic"

Lars Stampe Villadsen, December 09, 2005 - 4:48 am UTC

Try the search function here on asktom...

One good one is:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1253700499661

I hear what you're saying...

Ron, December 09, 2005 - 3:06 pm UTC

but after reading Jonathan's book, comments like
<The "cost" is just some artificial number> and
<they might as well be random numbers>
aren't consistent with Jonathan's.

In his book, Jonathan explains what COST is with the following Q and A: "What does the cost represent? - The answer is simple: the cost represents (and has always represented) the optimizer's best estimate of the time it will take to execute the statement." and he continues "The cost is the time spent on single-block reads, plus the time spent on multiblock reads, plus the CPU time required, all divided by the time it takes to do a single-block read. Which means that the cost is the total predicted execution time for the statement, expressed in units of the single-block read time."

I'm trying to understand, then, how COST could be <random> and <artificial>.

Thanks for bearing with me.

Tom Kyte
December 10, 2005 - 4:58 am UTC

My perspective is people get hung up on the cost. Totally and completely hung up on the cost.

I've seen shops that say "no query with cost more than 100", that is "not wise". Do all queries with a cost of "50" run at the same speed?

"best estimate", it is just a number, do not get hung up on THAT NUMBER.

You know what counts - how does it perform.


Keep reading Jonathan's book - take the example on Zodiac signs and date of birth.

How many people are Aries? about 1/12th
How many people are born in december? About 1/12th

How many Aries are born in December? CBO says about 1/144th of all records. I'd say zero.

The CBO is a mathematical model. If you change the inputs (change the assumptions, add indexes, reorg the structure, do something differently), the inputs to the model change drastically as can the results - hence don't try to compare them so much as understand what the cost represents and how it is arrived at.
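For what it is worth, later releases let you tell that model about exactly this kind of correlation through extended statistics on a column group - a sketch only, 11g onwards, with made-up table and column names:

-- create a column group so the optimizer has combined statistics for the pair
select dbms_stats.create_extended_stats(user, 'T', '(zodiac_sign, birth_month)') from dual;

exec dbms_stats.gather_table_stats(user, 'T');

With the column group in place, the combined selectivity comes from its statistics instead of multiplying 1/12 by 1/12.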



How to optimize such query ?

Parag Jayant Patankar, February 16, 2006 - 4:47 am UTC

Hi Tom,

I have the following 3rd-party "BIG" query, which uses nothing but the same table to calculate some values

UPDATE TB35H c
SET (c.J8700A, c.J8700B, c.J8700C) = (SELECT
courant.a6760 - nvl(precedent.a6760,0),
courant.a6800 - nvl(precedent.a6800,0),
courant.a6800e - nvl(precedent.a6800e,0)
from TB35H courant, TB35H precedent
where
courant.A0460=C.A0460 and
courant.A090EE=c.A090EE and
courant.B010EE=c.B010EE and
courant.G000EE=c.G000EE and
courant.A230EE=c.A230EE and
courant.A00903=c.A00903 and
courant.A10103=c.A10103 and
courant.A60003=c.A60003 and
courant.A02303=c.A02303 and
courant.A9490D=c.A9490D and
precedent.A0460(+)=add_months(courant.a0460,-1) and
courant.A090EE=precedent.A090EE(+) and
courant.B010EE=precedent.B010EE(+) and
courant.G000EE=precedent.G000EE(+) and
courant.A230EE=precedent.A230EE(+) and
courant.A00903=precedent.A00903(+) and
courant.A10103=precedent.A10103(+) and
courant.A60003=precedent.A60003(+) and
courant.A02303=precedent.A02303(+) and
courant.A9490D=precedent.A9490D(+))
WHERE a0460=(select decode(C65101,3,C65201,C81001) from TDF31)
;

Can you show me how to write a "proper query" for this type of update against the same table?

It shows the following explain plan:

----------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT              |         |     1 |   138 |     2  (50)|
|   1 |  UPDATE                       | TB35H   |       |       |            |
|*  2 |   INDEX RANGE SCAN            | X01B35H |     1 |   138 |            |
|   3 |   TABLE ACCESS FULL           | TDF31   |     1 |    16 |     3  (34)|
|   4 |   NESTED LOOPS OUTER          |         |     1 |   276 |     2  (50)|
|   5 |    TABLE ACCESS BY INDEX ROWID| TB35H   |     1 |   138 |     2  (50)|
|*  6 |     INDEX UNIQUE SCAN         | X01B35H |     1 |       |            |
|   7 |    TABLE ACCESS BY INDEX ROWID| TB35H   |     1 |   138 |            |
|*  8 |     INDEX UNIQUE SCAN         | X01B35H |     1 |       |            |
----------------------------------------------------------------------------------

thanks for educating me.

best regards
pjp

Tom Kyte
February 16, 2006 - 11:50 am UTC

since you cannot rewrite it, you will need to work with the 3rd party vendor.

but if those card= values are correct in the above, this thing would be super fast. maybe statistics are "wrong"?
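If they might be stale, refreshing them is cheap to try - a sketch, substituting the real owner:

exec dbms_stats.gather_table_stats(ownname => user, tabname => 'TB35H', cascade => true);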

optimize query

Rich, September 04, 2020 - 6:09 am UTC

Can you please suggest a better way to optimize the query, perhaps with an analytic function?

exec :start_time := '25-AUG-20 10:55:00.000000000 PM';
exec :end_time := '25-AUG-20 11:00:00.000000000 PM';

select /*+ PARALLEL(4) */
sq.ask_price as "ask_price",
sq.ask_spread as "ask_spread",
sq.ask_yield as "ask_yield",
sq.ask_size as "ask_size",
sq.bid_price as "bid_price",
sq.bid_spread as "bid_spread",
sq.bid_yield as "bid_yield",
sq.bid_size as "bid_size",
sq.quote_time as "trade_time",
sq.omega_entity_id as "instrument_id",
sq.omega_source as "source_name",
sq.quote_source as "trade_source"
from
sup.supplier_quote sq
join (
SELECT
t.omega_entity_id,
max(quote_time) mqt
FROM
sup.supplier_quote t
where quote_time >= to_timestamp(:start_time)  AND quote_time < to_timestamp(:end_time)
group by omega_entity_id
) dc on sq.omega_entity_id = dc.omega_entity_id and sq.quote_time = dc.mqt
;


88690 rows selected.

Elapsed: 00:00:11.55

Execution Plan
----------------------------------------------------------
Plan hash value: 2079995162

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                            | Name                  | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                                     |                       |   844K|    94M|       |  3266K  (1)| 00:02:08 |       |       |        |      |   |
|   1 |  PX COORDINATOR                                      |                       |       |       |       |            |          |       |       |        |      |   |
|   2 |   PX SEND QC (RANDOM)                                | :TQ10001              |   844K|    94M|       |  3266K  (1)| 00:02:08 |       |       |  Q1,01 | P->S | QC (RAND)  |
|   3 |    NESTED LOOPS                                      |                       |   844K|    94M|       |  3266K  (1)| 00:02:08 |       |       |  Q1,01 | PCWP |   |
|   4 |     NESTED LOOPS                                     |                       |    16M|    94M|       |  3266K  (1)| 00:02:08 |       |       |  Q1,01 | PCWP |   |
|   5 |      VIEW                                            |                       |   844K|    20M|       |  1858K  (1)| 00:01:13 |       |       |  Q1,01 | PCWP |   |
|   6 |       HASH GROUP BY                                  |                       |   844K|    14M|   808M|  1858K  (1)| 00:01:13 |       |       |  Q1,01 | PCWP |   |
|   7 |        PX RECEIVE                                    |                       |   844K|    14M|       |  1858K  (1)| 00:01:13 |       |       |  Q1,01 | PCWP |   |
|   8 |         PX SEND HASH                                 | :TQ10000              |   844K|    14M|       |  1858K  (1)| 00:01:13 |       |       |  Q1,00 | P->P | HASH       |
|   9 |          HASH GROUP BY                               |                       |   844K|    14M|   808M|  1858K  (1)| 00:01:13 |       |       |  Q1,00 | PCWP |   |
|* 10 |           FILTER                                     |                       |       |       |       |            |          |       |       |  Q1,00 | PCWC |   |
|  11 |            PX PARTITION RANGE ITERATOR               |                       |    30M|   516M|       |  1842K  (1)| 00:01:12 |   KEY |   KEY |  Q1,00 | PCWC |   |
|  12 |             TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| SUPPLIER_QUOTE          |    30M|   516M|       |  1842K  (1)| 00:01:12 |   KEY |   KEY |  Q1,00 | PCWP |   |
|* 13 |              INDEX RANGE SCAN                        | SUPPLIER_QUOTE_TIME_IDX |    54M|       |       | 51908   (1)| 00:00:03 |   KEY |   KEY |  Q1,00 | PCWP |   |
|  14 |      PARTITION RANGE ITERATOR                        |                       |    20 |       |       |     1   (0)| 00:00:01 |   KEY |   KEY |  Q1,01 | PCWP |   |
|* 15 |       INDEX RANGE SCAN                               | SUPPLIER_QUOTE_TIME_IDX |    20 |       |       |     1   (0)| 00:00:01 |   KEY |   KEY |  Q1,01 | PCWP |   |
|* 16 |     TABLE ACCESS BY LOCAL INDEX ROWID                | SUPPLIER_QUOTE          |     1 |    91 |       |     2   (0)| 00:00:01 |     1 |     1 |  Q1,01 | PCWP |   |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

  10 - filter(TO_TIMESTAMP(:END_TIME)>TO_TIMESTAMP(:START_TIME))
  13 - access("QUOTE_TIME">=TO_TIMESTAMP(:START_TIME) AND "QUOTE_TIME"<TO_TIMESTAMP(:END_TIME))
  15 - access("VQ"."QUOTE_TIME"="DC"."MQT")
  16 - filter("VQ"."OMEGA_ENTITY_ID"="DC"."OMEGA_ENTITY_ID")

Note
-----
   - Degree of Parallelism is 4 because of hint


Statistics
----------------------------------------------------------
         76  recursive calls
          2  db block gets
    5349537  consistent gets
          1  physical reads
       2728  redo size
    3589392  bytes sent via SQL*Net to client
       2511  bytes received via SQL*Net from client
        175  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      88690  rows processed


Connor McDonald
September 07, 2020 - 2:26 am UTC

Try this

select *
from (
select /*+ PARALLEL(4) */
  sq.ask_price as "ask_price",
  sq.ask_spread as "ask_spread",
  sq.ask_yield as "ask_yield",
  sq.ask_size as "ask_size",
  sq.bid_price as "bid_price",
  sq.bid_spread as "bid_spread",
  sq.bid_yield as "bid_yield",
  sq.bid_size as "bid_size",
  sq.quote_time as "trade_time",
  sq.omega_entity_id as "instrument_id",
  sq.omega_source as "source_name",
  sq.quote_source as "trade_source",
  max(quote_time) over ( partition by omega_entity_id ) mqt 
from
  sup.supplier_quote sq
where quote_time >= to_timestamp(:start_time)  AND quote_time < to_timestamp(:end_time)
)
where "trade_time" = mqt


But also, that's a LOT of rows to be getting with an index. You might end up with some benefit from just scanning the partitions fully (although we don't know what your partition key is).
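To test the full-scan idea, a FULL hint alongside the existing PARALLEL hint is enough for a quick comparison - a sketch only:

select /*+ parallel(4) full(sq) */ ...same select list...
  from sup.supplier_quote sq
 where quote_time >= to_timestamp(:start_time) and quote_time < to_timestamp(:end_time)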

Query optimization

A reader, September 09, 2020 - 2:01 am UTC

Hi Connor,

Thank you.

Here are some additional details.

QUOTE_TIME column is the partition key which is TIMESTAMP datatype.

Application pulls the data from this table every 5 min (where quote_time >= to_timestamp(:start_time)  AND quote_time < to_timestamp(:end_time))

Out of 148 million rows in the partition, 1.6 million get selected based on the time range. Finally, around 88690 rows get returned.


I have an index on QUOTE_TIME (the partition key column). If I create the index in descending order, would that help Oracle scan the last 5 minutes of eligible rows from the index more quickly?

Example: 

START_TIME: 18-AUG-20 02:25:00.000000000 PM
END_TIME:   18-AUG-20 02:30:00.000000000 PM


SQL> select /*+ PARALLEL(a,32) */  count(*) from fmc.SUPPLIER_QUOTE partition(V_QUOTE_200819) a ;

            COUNT(*)
--------------------
           148850075


SQL>  select /*+ PARALLEL(a,32) */  count(*) from fmc.SUPPLIER_QUOTE partition(V_QUOTE_200819) a where quote_time >= to_timestamp(:start_time)  AND quote_time < to_timestamp(:end_time)
  2  /

            COUNT(*)
--------------------
             1680313


SQL>

With index scan:
================

select *
from (
select /*+ PARALLEL(4) */
  sq.ask_price as "ask_price",
  sq.ask_spread as "ask_spread",
  sq.ask_yield as "ask_yield",
  sq.ask_size as "ask_size",
  sq.bid_price as "bid_price",
  sq.bid_spread as "bid_spread",
  sq.bid_yield as "bid_yield",
  sq.bid_size as "bid_size",
  sq.quote_time as "trade_time",
  sq.omega_entity_id as "instrument_id",
  sq.omega_source as "source_name",
  sq.quote_source as "trade_source",
  max(quote_time) over ( partition by omega_entity_id ) mqt
from
  fmc.SUPPLIER_QUOTE sq
where quote_time >= to_timestamp(:start_time)  AND quote_time < to_timestamp(:end_time)
)
where "trade_time" = mqt
/

...
88690 rows selected.

Elapsed: 00:00:09.68

Execution Plan
----------------------------------------------------------
Plan hash value: 3850659771

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                         | Name                  | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                                  |                       |    30M|  6538M|       |  2143K  (1)| 00:01:24 |       |       |        |      |           |
|*  1 |  PX COORDINATOR                                   |                       |       |       |       |            |          |       |       |        |      |           |
|   2 |   PX SEND QC (RANDOM)                             | :TQ10001              |    30M|  6538M|       |  2143K  (1)| 00:01:24 |       |       |  Q1,01 | P->S | QC (RAND) |
|*  3 |    VIEW                                           |                       |    30M|  6538M|       |  2143K  (1)| 00:01:24 |       |       |  Q1,01 | PCWP |           |
|   4 |     WINDOW SORT                                   |                       |    30M|  2680M|  3548M|  2143K  (1)| 00:01:24 |       |       |  Q1,01 | PCWP |           |
|   5 |      PX RECEIVE                                   |                       |    30M|  2680M|       |  1975K  (1)| 00:01:18 |       |       |  Q1,01 | PCWP |           |
|   6 |       PX SEND HASH                                | :TQ10000              |    30M|  2680M|       |  1975K  (1)| 00:01:18 |       |       |  Q1,00 | P->P | HASH      |
|*  7 |        FILTER                                     |                       |       |       |       |            |          |       |       |  Q1,00 | PCWC |           |
|   8 |         PX PARTITION RANGE ITERATOR               |                       |    30M|  2680M|       |  1975K  (1)| 00:01:18 |   KEY |   KEY |  Q1,00 | PCWC |           |
|   9 |          TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| SUPPLIER_QUOTE        |    30M|  2680M|       |  1975K  (1)| 00:01:18 |   KEY |   KEY |  Q1,00 | PCWP |           |
|* 10 |           INDEX RANGE SCAN                        | SUPPLIER_QUOTE_TIME_IDX|   55M|       |       | 54678   (1)| 00:00:03 |   KEY |   KEY |  Q1,00 | PCWP |           |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(TO_TIMESTAMP(:END_TIME)>TO_TIMESTAMP(:START_TIME))
   3 - filter("trade_time"="MQT")
   7 - filter(TO_TIMESTAMP(:END_TIME)>TO_TIMESTAMP(:START_TIME))
  10 - access("QUOTE_TIME">=TO_TIMESTAMP(:START_TIME) AND "QUOTE_TIME"<TO_TIMESTAMP(:END_TIME))

Note
-----
   - Degree of Parallelism is 4 because of hint


Statistics
----------------------------------------------------------
         35  recursive calls
          0  db block gets
     198237  consistent gets
          0  physical reads
         88  redo size
    5420860  bytes sent via SQL*Net to client
      65640  bytes received via SQL*Net from client
       5914  SQL*Net roundtrips to/from client
          5  sorts (memory)
          0  sorts (disk)
      88690  rows processed

SQL>


With Parallel full table scan:
==============================


88690 rows selected.

Elapsed: 00:00:10.18

Execution Plan
----------------------------------------------------------
Plan hash value: 2586420883

--------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name         | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |              |    30M|  6538M|       |    11M  (1)| 00:07:25 |       |       |        |      |            |
|*  1 |  PX COORDINATOR                    |              |       |       |       |            |          |       |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)              | :TQ10001     |    30M|  6538M|       |    11M  (1)| 00:07:25 |       |       |  Q1,01 | P->S | QC (RAND)  |
|*  3 |    VIEW                            |              |    30M|  6538M|       |    11M  (1)| 00:07:25 |       |       |  Q1,01 | PCWP |            |
|   4 |     WINDOW SORT                    |              |    30M|  2680M|  3548M|    11M  (1)| 00:07:25 |       |       |  Q1,01 | PCWP |            |
|   5 |      PX RECEIVE                    |              |    30M|  2680M|       |    11M  (1)| 00:07:19 |       |       |  Q1,01 | PCWP |            |
|   6 |       PX SEND HASH                 | :TQ10000     |    30M|  2680M|       |    11M  (1)| 00:07:19 |       |       |  Q1,00 | P->P | HASH       |
|*  7 |        FILTER                      |              |       |       |       |            |          |       |       |  Q1,00 | PCWC |            |
|   8 |         PX PARTITION RANGE ITERATOR|              |    30M|  2680M|       |    11M  (1)| 00:07:19 |   KEY |   KEY |  Q1,00 | PCWC |            |
|*  9 |          TABLE ACCESS STORAGE FULL | SUPPLIER_QUOTE |  30M|  2680M|       |    11M  (1)| 00:07:19 |   KEY |   KEY |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(TO_TIMESTAMP(:END_TIME)>TO_TIMESTAMP(:START_TIME))
   3 - filter("trade_time"="MQT")
   7 - filter(TO_TIMESTAMP(:END_TIME)>TO_TIMESTAMP(:START_TIME))
   9 - storage("QUOTE_TIME">=TO_TIMESTAMP(:START_TIME) AND "QUOTE_TIME"<TO_TIMESTAMP(:END_TIME))
       filter("QUOTE_TIME">=TO_TIMESTAMP(:START_TIME) AND "QUOTE_TIME"<TO_TIMESTAMP(:END_TIME))

Note
-----
   - Degree of Parallelism is 4 because of hint


Statistics
----------------------------------------------------------
         24  recursive calls
          0  db block gets
    2955977  consistent gets
    2955963  physical reads
          0  redo size
    5420670  bytes sent via SQL*Net to client
      65640  bytes received via SQL*Net from client
       5914  SQL*Net roundtrips to/from client
          5  sorts (memory)
          0  sorts (disk)
      88690  rows processed


Connor McDonald
September 09, 2020 - 2:56 am UTC

"If I create the index in descending order, would that help Oracle scan the last 5 minutes of eligible rows from the index more quickly?"

Unlikely, because you are still, in effect, sorting - not the entire set as one, but per omega_entity_id.

If you're on 18c or above, you could consider more granular partitions (not that we know what your current one is .... ) and then merge them over time to keep partition numbers under control.

An example of that here

https://connor-mcdonald.com/2018/07/25/hyper-partitioned-index-avoidance-thingamajig/
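For illustration, the kind of housekeeping that approach implies - a sketch only, with hypothetical partition names (ONLINE merge needs 18c or later):

alter table sup.supplier_quote
  merge partitions p_20200825_2255, p_20200825_2300
  into partition p_20200825_2255_2300
  update indexes online;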

Rich, September 09, 2020 - 3:30 am UTC

Thanks Connor

QUOTE_TIME is range partitioned (daily) on timestamp column.

I will look into Hyperpartitioned index implementation when we upgrade to 19c.

We have similar kinds of queries where the application pulls data in 5-minute intervals from daily range-partitioned tables.

optimize update query

A reader, November 14, 2022 - 4:47 am UTC

Greetings!

Is there any better way to rewrite this UPDATE query?

The EP.F_PRC table is a composite-partitioned table in which:

snp_dt is the partition key column
inst_id is the subpartition key column.



UPDATE EP.F_PRC fpa
                      SET (GIVEN_TYPE, LATEST_COMMENT, LATEST_COMMENT_DATETIME, TERM_BY_YEAR, COUPON_BUCKET, POOL_OF_POOLS, STATE_100PCT_NAME, REASON_CODE) = 
                      (SELECT fp.GIVEN_TYPE,
CASE WHEN LATEST_COMMENT_DATETIME <= to_date('10/10/2022', 'mm/dd/yyyy') THEN fp.LATEST_COMMENT ELSE (select latest_comment
from EP.F_PRC WHERE trunc(snp_date) = to_date('10/07/2022', 'mm/dd/yyyy') and instr_id=fpa.instr_id FETCH NEXT 1 ROWS ONLY) END AS "Latest_Comment", 
CASE WHEN LATEST_COMMENT_DATETIME <= to_date('10/10/2022', 'mm/dd/yyyy') THEN LATEST_COMMENT_DATETIME ELSE (select LATEST_COMMENT_DATETIME
from EP.F_PRC WHERE trunc(snp_date) = to_date('10/07/2022', 'mm/dd/yyyy') and instr_id=fpa.instr_id FETCH NEXT 1 ROWS ONLY) END "Latest_Comment_Date", 
 TERM_BY_YEAR, COUPON_BUCKET, POOL_OF_POOLS, STATE_100PCT_NAME, REASON_CODE
FROM EP.F_PRC fp,  OP.batchjob op
WHERE fp.snp_date = to_date('10/13/2022', 'mm/dd/yyyy') and fp.snp_id = op.snp_id AND op.batch_name IN ('MBS_1500NY_Pricing','MBS_1600NY_Pricing') and fp.instr_id = fpa.instr_id FETCH NEXT 1 ROWS ONLY)
WHERE fpa.snp_date BETWEEN to_date('10/10/2022', 'mm/dd/yyyy') AND to_date('10/11/2022', 'mm/dd/yyyy')
and fpa.snp_id IN (select snp_id from OP.batchjob WHERE batch_name IN ('MBS_1500NY_Pricing','MBS_1600NY_Pricing'))
AND fpa.instr_id IN (select instr_id from EP.instrument_temp);



Thanks!

Chris Saxon
November 14, 2022 - 1:45 pm UTC

Is there any better way to rewrite this UPDATE query?

Use consistent formatting so it's easier to read? ;)

In seriousness, without knowing

- What the query is trying to do (in business terms)
- Its execution plan

It's hard to suggest changes/improvements. You can get the plan by running:

set serveroutput off
alter session set statistics_level = all;

update ... ;

select * 
from   table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));

optimize update query

Rajeshwaran, Jeyabal, November 15, 2022 - 6:01 am UTC

Have formatted the above code for our friend and it looks like this
UPDATE ep.f_prc fpa
SET
    ( given_type,
      latest_comment,
      latest_comment_datetime,
      term_by_year,
      coupon_bucket,
      pool_of_pools,
      state_100pct_name,
      reason_code ) = (
        SELECT fp.given_type,
            CASE WHEN latest_comment_datetime <= TO_DATE('10/10/2022', 'mm/dd/yyyy') THEN
                    fp.latest_comment
                ELSE
                    ( SELECT latest_comment
                        FROM ep.f_prc
                        WHERE trunc(snp_date) = TO_DATE('10/07/2022', 'mm/dd/yyyy')
                        AND instr_id = fpa.instr_id
                        FETCH NEXT 1 ROWS ONLY
                    )
            END AS "Latest_Comment",
            CASE WHEN latest_comment_datetime <= TO_DATE('10/10/2022', 'mm/dd/yyyy') THEN
                    latest_comment_datetime
                ELSE
                    ( SELECT latest_comment_datetime
                        FROM ep.f_prc
                        WHERE trunc(snp_date) = TO_DATE('10/07/2022', 'mm/dd/yyyy')
                         AND instr_id = fpa.instr_id
                        FETCH NEXT 1 ROWS ONLY )
            END "Latest_Comment_Date",
            term_by_year,
            coupon_bucket,
            pool_of_pools,
            state_100pct_name,
            reason_code
        FROM ep.f_prc    fp,
             op.batchjob op
        WHERE fp.snp_date = TO_DATE('10/13/2022', 'mm/dd/yyyy')
  AND fp.snp_id = op.snp_id
  AND op.batch_name IN ( 'MBS_1500NY_Pricing', 'MBS_1600NY_Pricing' )
  AND fp.instr_id = fpa.instr_id
        FETCH NEXT 1 ROWS ONLY
    )
WHERE
    fpa.snp_date BETWEEN TO_DATE('10/10/2022', 'mm/dd/yyyy') AND TO_DATE('10/11/2022', 'mm/dd/yyyy')
    AND fpa.snp_id IN ( SELECT snp_id
        FROM op.batchjob
        WHERE batch_name IN ( 'MBS_1500NY_Pricing', 'MBS_1600NY_Pricing' )
  )
    AND fpa.instr_id IN ( SELECT instr_id FROM ep.instrument_temp );


the driving factor is "fp.instr_id = fpa.instr_id" - which means that for each row in the outer table we run this correlated subquery once to provide the values for the SET clause.

This, I think, can be turned into one big bulk operation like this:
merge into ep.f_prc fpa using ( select * from ( select fp.instr_id ,
                 fp.given_type,
                 case when fp.latest_comment_datetime <= TO_DATE('10/10/2022', 'mm/dd/yyyy') then 
                    fp.latest_comment else t2.latest_comment end as latest_comment ,
                 case when fp.latest_comment_datetime <= TO_DATE('10/10/2022', 'mm/dd/yyyy') then 
                    fp.latest_comment_datetime else t2.latest_comment_datetime end as latest_comment_datetime,
                    fp.term_by_year,
                    fp.coupon_bucket,
                    fp.pool_of_pools,
                    fp.state_100pct_name,
                    fp.reason_code,
                    row_number() over(partition by fp.instr_id order by fp.latest_comment_datetime ) rn 
            from ep.f_prc fp,
                 op.batchjob op,
                 ( select instr_id,
                        min(latest_comment) keep(dense_rank 
                                first order by latest_comment_datetime desc) as latest_comment ,
                        max(latest_comment_datetime) as latest_comment_datetime
                  from ep.f_prc
                  where snp_date >= TO_DATE('10/07/2022', 'mm/dd/yyyy')
                  and snp_date < TO_DATE('10/08/2022', 'mm/dd/yyyy') 
                  group by instr_id ) t2
            where fp.snp_date = TO_DATE('10/13/2022', 'mm/dd/yyyy')
            and fp.snp_id = op.snp_id
            and op.batch_name IN ( 'MBS_1500NY_Pricing', 'MBS_1600NY_Pricing' )
            and fp.instr_id = t2.instr_id (+) )
            where rn = 1 ) t3
on (fpa.instr_id = t3.instr_id )
when matched then 
update set 
    fpa.given_type = t3.given_type ,
    fpa.latest_comment = t3.latest_comment ,
    fpa.latest_comment_datetime = t3.latest_comment_datetime,
    fpa.term_by_year = t3.term_by_year ,
    fpa.coupon_bucket = t3.coupon_bucket,
    fpa.pool_of_pools = t3.pool_of_pools,
    fpa.state_100pct_name = t3.state_100pct_name ,
    fpa.reason_code = t3.reason_code ;

So understand the above code, fix any syntax errors or typos, test it in your environment, and let us know how it goes.
Connor McDonald
November 15, 2022 - 7:12 am UTC

nice stuff

Optimize update query

Pat, November 16, 2022 - 4:04 am UTC

Thank you for the revised query.

Query optimization

John, July 24, 2023 - 10:13 pm UTC

Greetings!
We wanted to improve the response time for this query. Just wondering if this query could be rewritten, or a new or existing index created, in a more efficient way to speed up the response time.

The tables below are composite partitioned (RANGE partitioned by eff_dt_start and subpartitioned by HASH on instrument_id):
TABLE NAME: FF_INC_PRC_BT
TABLE NAME: F_PRC_ASSUMP_BT

********************************************************************************

SELECT ds.SNAP_DATE, ds.NAME, ds.TIME, dpt.PRICE_TYPE,
fip.BID_PRICE, fip.MID_PRICE, fip.ASK_PRICE,
fip.ACCRUED_INTEREST, fip.BID_YIELD, fip.MID_YIELD,
fip.ASK_YIELD, fip.BID_SPREAD, fip.MID_SPREAD, fip.ASK_SPREAD,
fip.PRICE_CURRENCY, fip.BID_SPREAD_CURVE, fpa.AVERAGE_LIFE_DATE,
fip.CURVE_YIELD, fip.DERIVATION_OF_PRICE, fpa.PRICE_TO_DATE,
fpa.BENCHMARK_ID, fpa.BENCHMARK_NAME, fip.SPREAD_TO_BENCHMARK,
fpa.ANT_REDEMPTION_DATE, fpa.ANT_REDEMPTION_PRICE, fpa.AVERAGE_LIFE_VOLATILITY, fpa.RELEASED_DEMINIMIS_VALUES
FROM EODP.FF_INC_PRC_BT fip
INNER JOIN EODP.F_PRC_ASSUMP_BT fpa
ON fip.EFF_DT_START >= '01-JAN-2021' AND fip.EFF_DT_START < '01-JUL-2023'
AND fip.INSTRUMENT_ID = 40030133
AND fip.recorded_date_end > sys_extract_utc(SYSTIMESTAMP)
AND fip.EFF_DT_START = fpa.EFF_DT_START
AND fip.INSTRUMENT_ID = fpa.INSTRUMENT_ID
AND fip.ASSUMP_ID = fpa.ASSUMP_ID
AND fpa.recorded_date_end > sys_extract_utc(SYSTIMESTAMP)
INNER JOIN EODP.DIM_SNAP ds
ON ds.SNAP_ID = fip.SNAP_ID
INNER JOIN EODP.DIM_PRICE_TYPE dpt
ON dpt.PRICE_TYPE_ID = fip.PRICE_TYPE_ID
order by ds.snap_date desc

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.70       0.64          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       85      1.75       5.55       9811      11573          0        1258
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       87      2.46       6.20       9811      11573          0        1258

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 108
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
      1258       1258       1258  SORT ORDER BY (cr=11586 pr=9812 pw=0 time=5560339 us starts=1 cost=1159 size=47760 card=240)
      1258       1258       1258   HASH JOIN  (cr=11586 pr=9812 pw=0 time=5558834 us starts=1 cost=1158 size=47760 card=240)
      1258       1258       1258    JOIN FILTER CREATE :BF0000 (cr=11200 pr=9811 pw=0 time=5543001 us starts=1 cost=1053 size=40320 card=240)
      1258       1258       1258     HASH JOIN  (cr=11200 pr=9811 pw=0 time=5542840 us starts=1 cost=1053 size=40320 card=240)
         6          6          6      VIEW  index$_join$_006 (cr=8 pr=0 pw=0 time=1417 us starts=1 cost=2 size=126 card=6)
         6          6          6       HASH JOIN  (cr=8 pr=0 pw=0 time=1414 us starts=1)
         6          6          6        INDEX STORAGE FAST FULL SCAN DIM_PRICE_TYPE_PK (cr=4 pr=0 pw=0 time=499 us starts=1 cost=1 size=126 card=6)(object id 281322)
         6          6          6        INDEX STORAGE FAST FULL SCAN DIM_PRICE_TYPE_UK1 (cr=4 pr=0 pw=0 time=503 us starts=1 cost=1 size=126 card=6)(object id 282128)
      1258       1258       1258      NESTED LOOPS  (cr=11192 pr=9811 pw=0 time=5536572 us starts=1 cost=1051 size=35280 card=240)
      1258       1258       1258       NESTED LOOPS  (cr=7226 pr=6362 pw=0 time=3577762 us starts=1 cost=1051 size=35280 card=302)
      1258       1258       1258        PARTITION RANGE ITERATOR PARTITION: KEY KEY (cr=4579 pr=4036 pw=0 time=2120684 us starts=1 cost=493 size=27784 card=302)
      1258       1258       1258         PARTITION HASH SINGLE PARTITION: 1 1 (cr=4579 pr=4036 pw=0 time=2120326 us starts=131 cost=493 size=27784 card=302)
      1258       1258       1258          TABLE ACCESS BY LOCAL INDEX ROWID BATCHED FF_INC_PRC_BT PARTITION:   (cr=4579 pr=4036 pw=0 time=2119600 us starts=131 cost=493 size=27784 card=302)
      1258       1258       1258           INDEX RANGE SCAN FF_INCOME_PRC_BT_IDX1 PARTITION:   (cr=397 pr=363 pw=0 time=240073 us starts=131 cost=263 size=0 card=1)(object id 48847130)
      1258       1258       1258        PARTITION RANGE AND PARTITION: KEY(AP) KEY(AP) (cr=2647 pr=2326 pw=0 time=1455825 us starts=1258 cost=1 size=0 card=1)
      1258       1258       1258         PARTITION HASH SINGLE PARTITION: 1 1 (cr=2647 pr=2326 pw=0 time=1447733 us starts=1258 cost=1 size=0 card=1)
      1258       1258       1258          INDEX UNIQUE SCAN F_PRC_ASSUMP_BT_PK PARTITION: KEY KEY (cr=2647 pr=2326 pw=0 time=1444285 us starts=1258 cost=1 size=0 card=1)(object id 48743085)
      1258       1258       1258       TABLE ACCESS BY LOCAL INDEX ROWID F_PRC_ASSUMP_BT PARTITION: 1 1 (cr=3966 pr=3449 pw=0 time=1957210 us starts=1258 cost=2 size=55 card=1)
      1321       1321       1321    JOIN FILTER USE :BF0000 (cr=373 pr=0 pw=0 time=10094 us starts=1 cost=105 size=1098919 card=35449)
      1321       1321       1321     TABLE ACCESS STORAGE FULL DIM_SNAP (cr=373 pr=0 pw=0 time=10062 us starts=1 cost=105 size=1098919 card=35449)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  Disk file operations I/O                       51        0.00          0.00
  IPC group service call                         10        0.00          0.00
  library cache lock                             21        0.00          0.00
  library cache pin                              21        0.00          0.00
  SQL*Net message to client                      85        0.00          0.00
  enq: JZ - Join group dictionary                 1        0.00          0.00
  gc current block 3-way                        187        0.00          0.04
  PGA memory operation                           11        0.00          0.00
  gc cr grant 2-way                            7448        0.00          0.82
  cell single block physical read              9061        0.01          3.40
  gc current grant 2-way                         87        0.00          0.00
  gc cr multi block grant                        78        0.00          0.01
  cell list of blocks physical read              78        0.00          0.09
  gc current block 2-way                        130        0.00          0.01
  gc cr multi block mixed                        20        0.00          0.00
  SQL*Net message from client                    85        8.72          8.77
********************************************************************************

TABLE NAME: FF_INC_PRC_BT

INDEX_OWNER     INDEX_NAME                               COLUMN_NAME                    COLUMN_POSITION
--------------- ---------------------------------------- ------------------------------ ---------------
EODP            FF_INCOME_PRC_BT_IDX1                    INSTRUMENT_ID                                1
EODP            FF_INCOME_PRC_BT_IDX1                    EFFECTIVE_DATE_START                         2
EODP            FF_INCOME_PRC_BT_IDX1                    ASSUMP_ID                                    3
EODP            FF_INCOME_PRC_BT_UK1                     SNAP_ID                                      1
EODP            FF_INCOME_PRC_BT_UK1                     INSTRUMENT_ID                                2
EODP            FF_INCOME_PRC_BT_UK1                     PRICE_TYPE_ID                                3
EODP            FF_INCOME_PRC_BT_UK1                     RECORDED_DATE_START                          4
EODP            FF_INCOME_PRC_BT_UK1                     EFFECTIVE_DATE_START                         5
EODP            FF_INCOME_PRICE_BT_PK                    PRICE_ID                                     1
EODP            FF_INCOME_PRICE_BT_PK                    EFFECTIVE_DATE_START                         2
EODP            FF_INCOME_PRICE_BT_PK                    INSTRUMENT_ID                                3


TABLE NAME: F_PRC_ASSUMP_BT

INDEX_OWNER     INDEX_NAME                               COLUMN_NAME                    COLUMN_POSITION
--------------- ---------------------------------------- ------------------------------ ---------------
EODP            F_PRC_ASSUMP_BT_PK                       ASSUMP_ID                                    1
EODP            F_PRC_ASSUMP_BT_PK                       EFFECTIVE_DATE_START                         2
EODP            F_PRC_ASSUMP_BT_PK                       INSTRUMENT_ID                                3
EODP            F_PRC_ASSUMP_BT_UK1                      SNAP_ID                                      1
EODP            F_PRC_ASSUMP_BT_UK1                      INSTRUMENT_ID                                2
EODP            F_PRC_ASSUMP_BT_UK1                      RECORDED_DATE_START                          3
EODP            F_PRC_ASSUMP_BT_UK1                      EFFECTIVE_DATE_START                         4



Connor McDonald
August 25, 2023 - 4:54 am UTC

Well here's a quick initial step

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.70       0.64          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       85      1.75       5.55       9811      11573          0        1258
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       87      2.46       6.20       9811      11573          0        1258

cell single block physical read              9061        0.01          3.40
SQL*Net message from client                    85        8.72          8.77



There's a good chance a chunk of that 8.7 seconds is spent between the fetch calls, so push the fetch size up to (say) 1000 or 2000, which will cut the 85 fetch calls down to 1 or 2.
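
For example, in SQL*Plus the fetch size is just the arraysize setting (in a JDBC client it would be Statement.setFetchSize). A minimal sketch, assuming you re-run the query from SQL*Plus:

set arraysize 1000
-- re-run the original query; with ~1258 rows returned, the Fetch count
-- in the tkprof output should drop from 85 calls to 2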

Query takes a lot of temp space and does not complete

Wil, August 30, 2024 - 4:14 am UTC

Can you please provide a recommendation on rewriting this query in a more optimized way?
This query uses 'rollup group by', which is consuming a lot of TEMP space (it crossed 1 TB of temp space and ran out of temp tablespace), and it does not complete.

The SQL Monitor report is attached below. Thanks.

 SQL_FULLTEXT
              -----------------------------------------------------------------------------
     SELECT type, COUNT( eventid ) AS selectedEventCategoryCount FROM (  SELECT 
     INSTRUMENT_ID, EVENT_ID, eventId, eventRevisionId, actionType, announcementDate, 
     announcementStatus, countryOfIncCode, cusip, declarationDate, earlyDeadlineExpiryDate, 
     effectiveDate, entryDate, status, subtype, type, expirationDate, homeMarketCode, 
     homeMarketCountry, instrumentName, securityType, isin, recordDate, relatedEventId, 
     ticker, sedol, sedols, OFFERED_COMPANY_ID, OFFERING_COMPANY_ID, dividendStatus, marketExDate, 
     marketPaymentDate, marketEffectiveDate, marketsOptionNumber
              , selectedEventCategoryCount 
     FROM (SELECT CAM.*,  DIVIDEND.STATUS
               as dividendStatus,
              MARKETS.EX_DATE as marketExDate,
              MARKETS.PAYMENT_DATE as marketPaymentDate,
              MARKETS.MARKET_EFFECTIVE_DATE as marketEffectiveDate,
              MARKETS.OPTION_NUMBER as marketsOptionNumber,  0 as selectedEventCategoryCount 
     FROM ( SELECT ROW_NUMBER() OVER(PARTITION BY CAM.EVENT_ID 
            ORDER BY CAM.EVENT_REVISION_ID DESC) as rowNumber,  CAM.INSTRUMENT_ID,
              CAM.EVENT_ID,
              CAM.EVENT_ID as eventId,
              CAM.EVENT_REVISION_ID as eventRevisionId,
              CAM.ACTION_TYPE as actionType,
              CAM.ANNOUNCEMENT_DATE as announcementDate,
              CAM.ANNOUNCEMENT_STATUS as announcementStatus,
              CAM.COUNTRY_OF_INC_CODE as countryOfIncCode,
              CAM.CUSIP as cusip,
              CAM.DECLARATION_DATE as declarationDate,
              CAM.EARLY_DEADLINE_EXPIRY_DATE as earlyDeadlineExpiryDate,
              CAM.EFFECTIVE_DATE as effectiveDate,
              CAM.ENTRY_DATE as entryDate,
              CAM.EVENT_STATUS as status,
              CAM.EVENT_SUBTYPE as subtype,
              CAM.EVENT_TYPE as type,
              CAM.EXPIRATION_DATE as expirationDate,
              CAM.HOME_MARKET_CODE as homeMarketCode,
              CAM.HOME_MARKET_COUNTRY as homeMarketCountry,
              CAM.INSTRUMENT_NAME as instrumentName,
              CAM.SECURITY_TYPE as securityType,
              CAM.ISIN as isin,
              CAM.RECORD_DATE as recordDate,
              CAM.RELATED_EVENT_ID as relatedEventId,
              CAM.TICKER as ticker,
              CAM.SEDOL as sedol,
              CAM.SEDOLS as sedols,
              CAM.OFFERED_COMPANY_ID,
              CAM.OFFERING_COMPANY_ID FROM CA.CAM WHERE
              CAM.EVENT_ID > 0  AND CAM.EVENT_STATUS !=  :1   )
              CAM  LEFT JOIN CA.DIVIDEND ON CAM.EVENT_ID
               = DIVIDEND.EVENT_ID AND DIVIDEND.STATUS IS NOT NULL
              LEFT JOIN CA.MARKETS ON MARKETS.EVENT_ID = CAM.EVENT_ID )
               rollup GROUP BY INSTRUMENT_ID, EVENT_ID, eventId, eventRevisionId, actionType,
              announcementDate, announcementStatus, countryOfIncCode, cusip, declarationDate,
              earlyDeadlineExpiryDate, effectiveDate, entryDate, status, subtype, type, 
     expirationDate, homeMarketCode, homeMarketCountry, instrumentName, securityType, isin,
               recordDate, relatedEventId, ticker, sedol, sedols, OFFERED_COMPANY_ID, OFFERING_COMPANY_ID, 
      dividendStatus, marketExDate, marketPaymentDate, marketEffectiveDate, marketsOptionNumber, 
      selectedEventCategoryCount)  
      GROUP BY type


Global Information
------------------------------
 Status              :  EXECUTING
 Instance ID         :  4
 Session             :  SVC_QUERY (2819:27773)
 SQL ID              :  5r16uhcbnxn2s
 SQL Execution ID    :  67108865
 Execution Started   :  08/27/2024 08:51:46
 First Refresh Time  :  08/27/2024 08:51:50
 Last Refresh Time   :  08/27/2024 20:33:19
 Duration            :  42095s
 Module/Action       :  list-service
 Service             :  READ_SVC
 Program             :  list-service

Binds
========================================================================================================================
| Name | Position |     Type      |                                       Value                                        |
========================================================================================================================
| :1   |        1 | VARCHAR2(128) | Deleted Announcement                                                               |
========================================================================================================================

Global Stats
=======================================================================================================================================================================
| Elapsed |   Cpu   |    IO    | Application | Concurrency | Cluster  |  Other   | Buffer | Read | Read  | Write | Write | Uncompressed |  Offload   |    Offload     |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   |  Waits(s)   | Waits(s) | Waits(s) |  Gets  | Reqs | Bytes | Reqs  | Bytes |    Bytes     | Elig Bytes | Returned Bytes |
=======================================================================================================================================================================
|   42096 |    6233 |    10467 |        0.00 |        3041 |     7275 |    15080 |    78M | 128K |  90GB |  973K |   1TB |         25GB |       71GB |            1TB |
=======================================================================================================================================================================

SQL Plan Monitoring Details (Plan Hash Value=1566660356)
=======================================================================================================================================================================================================================================================
| Id   |            Operation             |        Name        |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Read  | Read  | Write | Write |  Mem       | Temp  | Activity |                    Activity Detail                    | Progress |
|      |                                  |                    | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs  | Bytes | Reqs  | Bytes |            |       |   (%)    |                      (# samples)                      |          |
=======================================================================================================================================================================================================================================================
|    0 | SELECT STATEMENT                 |                    |         |      |     26087 | +15466 |     1 |        0 |       |       |       |       |            |       |    37.25 | gc buffer busy acquire (2976)                         |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | gc buffer busy release (128)                          |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | gc current block 2-way (243)                          |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | gc current block 3-way (590)                          |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | gc current block busy (1461)                          |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | gc current block congested (2)                        |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | gc current grant 2-way (2)                            |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | gc current grant 3-way (4)                            |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | gc current grant busy (8)                             |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | gc current request (7)                                |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | gc current retry (6)                                  |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | buffer busy waits (2043)                              |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | latch: cache buffers chains (2)                       |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | enq: SS - contention (533)                            |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | log file switch completion (1)                        |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | write complete waits (1)                              |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | Cpu (550)                                             |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | DFS lock handle (457)                                 |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | cell single block physical read: RDMA (1)             |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | enq: TS - contention (17)                             |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | ges message buffer allocation (13)                    |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | latch free (3)                                        |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | cell single block physical read: flash cache (1)      |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | local write wait (845)                                |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | read by other session (5)                             |          |
|    1 |   HASH GROUP BY                  |                    |      13 |  21M |     24923 | +16340 |     1 |        0 |       |       |       |       |            |       |     0.16 | ASM file metadata operation (3)                       |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | Sync ASM rebalance (40)                               |          |
|    2 |    VIEW                          | VM_NWVW_0          |    264M |  21M |           |        |     1 |          |       |       |       |       |            |       |          |                                                       |          |
| -> 3 |     HASH GROUP BY                |                    |    264M |  21M |     42026 |    +71 |     1 |        0 |       |       |  948K | 919GB |   2GB      | 926GB |    60.57 | statement suspended, wait error to be cleared (11045) |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | Cpu (796)                                             |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | ASM IO for non-blocking poll (7)                      |          |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | direct path write temp (4248)                         |          |
|    4 |      HASH JOIN RIGHT OUTER       |                    |    264M |   7M |     42051 |     +4 |     1 |       4G | 19568 |  19GB | 22055 |  21GB | 109MB      |  25GB |     2.02 | Cpu (501)                                             |      74% |
|      |                                  |                    |         |      |           |        |       |          |       |       |       |       |            |       |          | direct path read temp (37)                            |          |
|    5 |       TABLE ACCESS STORAGE FULL  | MARKETS            |    137M | 604K |        31 |     +4 |     1 |     137M | 17858 |  17GB |       |       |     .      |     . |          |                                                       |     100% |
|    6 |       HASH JOIN RIGHT OUTER      |                    |    150M |   4M |      8047 |    +34 |     1 |     165M |       |       |       |       |     .      |     . |          |                                                       |          |
|    7 |        TABLE ACCESS STORAGE FULL | DIVIDEND           |      5M | 113K |         4 |    +34 |     1 |       5M |  3275 |   3GB |  3665 |   4GB |     .      |     . |          |                                                       |          |
|    8 |        TABLE ACCESS STORAGE FULL | CAM                |    149M |   2M |      8044 |    +37 |     1 |     149M | 78156 |  51GB |       |       |     .      |     . |          |                                                       |     100% |
=======================================================================================================================================================================================================================================================






Chris Saxon
September 02, 2024 - 2:26 pm UTC

A few observations:

- You say this query uses 'rollup group by', but it actually has an inline view aliased rollup which is then grouped - that is different from GROUP BY ROLLUP ( ... ).
- What is the purpose of the inner group by? There are no aggregations in the inner select. If you can remove it and keep only the outer group by, that should reduce the temp space needed significantly - see the sketch after this list.
- The joins produce 4 billion rows - is this correct? The input tables have ~100 million rows at most. Is it possible a join condition is missing, e.g. between the dividend and markets tables?
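
If the inner grouping really is redundant, the whole statement collapses to something along these lines (a sketch only, untested against your schema, and note the left joins can still multiply the counts, so check that is what you intend):

SELECT cam.event_type AS type,
       COUNT(cam.event_id) AS selectedEventCategoryCount
FROM   ca.cam
       LEFT JOIN ca.dividend
              ON  cam.event_id = dividend.event_id
              AND dividend.status IS NOT NULL
       LEFT JOIN ca.markets
              ON  markets.event_id = cam.event_id
WHERE  cam.event_id > 0
AND    cam.event_status != :1
GROUP  BY cam.event_type;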

Thanks for the inputs

Wil, September 02, 2024 - 5:58 pm UTC

Thanks. I will check the query along those lines and will get back to you.

Query rewrite suggestion

Wil, September 05, 2024 - 4:15 am UTC

Hi,
The query below is consuming a lot of temp space and timing out. Is it possible to rewrite this query in a better way?
Thanks!



SQL Text
------------------------------
SELECT
     type,
     COUNT(eventid) AS selectedeventcategorycount
 FROM
     (
         SELECT
             cam_outer.*
         FROM
             (
                 SELECT
                     ROW_NUMBER() OVER(
                         PARTITION BY cam.event_id
                         ORDER BY
                             cam.event_revision_id DESC
                     ) AS rownumber,
                     cam.instrument_id,
                     cam.event_id,
                     cam.event_id                     AS eventid,
                     cam.event_revision_id            AS eventrevisionid,
                     cam.action_type                  AS actiontype,
                     cam.announcement_date            AS announcementdate,
                     cam.announcement_status          AS announcementstatus,
                     cam.country_of_inc_code          AS countryofinccode,
                     cam.cusip                        AS cusip,
                     cam.declaration_date             AS declarationdate,
                     cam.early_deadline_expiry_date   AS earlydeadlineexpirydate,
                     cam.effective_date               AS effectivedate,
                     cam.entry_date                   AS entrydate,
                     cam.event_status                 AS status,
                     cam.event_subtype                AS subtype,
                     cam.event_type                   AS type,
                     cam.expiration_date              AS expirationdate,
                     cam.home_market_code             AS homemarketcode,
                     cam.home_market_country          AS homemarketcountry,
                     cam.instrument_name              AS instrumentname,
                     cam.security_type                AS securitytype,
                     cam.isin                         AS isin,
                     cam.record_date                  AS recorddate,
                     cam.related_event_id             AS relatedeventid,
                     cam.ticker                       AS ticker,
                     cam.sedol                        AS sedol,
                     cam.sedols                       AS sedols,
                     cam.offered_company_id,
                     cam.offering_company_id
                 FROM
                     ca.cam left
                     JOIN ca.dividend ON cam.event_id = dividend.event_id
                                         AND dividend.status IS NOT NULL
                     LEFT JOIN ca.markets ON markets.event_id = cam.event_id
                 WHERE
                     cam.event_id > 0
                     AND cam.event_status !=:1
             ) cam_outer
         WHERE
             cam_outer.rownumber = 1
     )
 GROUP BY
     type

Global Information
------------------------------
 Status              :  EXECUTING
 Instance ID         :  4
 Session             :  SVC_ACCT (2478:59963)
 SQL ID              :  9acnvmp1tygcg
 SQL Execution ID    :  67108871
 Execution Started   :  09/04/2024 21:30:09
 First Refresh Time  :  09/04/2024 21:30:15
 Last Refresh Time   :  09/04/2024 21:47:06
 Duration            :  1018s
 Module/Action       :  list-service/-
 Service             :  READ_SVC
 Program             :  list-service

Binds
========================================================================================================================
| Name | Position |     Type      |                                       Value                                        |
========================================================================================================================
| :1   |        1 | VARCHAR2(128) | Deleted Announcement                                                               |
========================================================================================================================

Global Stats
===============================================================================================================================================
| Elapsed |   Cpu   |    IO    | Application |  Other   | Buffer | Read  | Read  | Write | Write | Uncompressed |  Offload   |    Offload     |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   | Waits(s) |  Gets  | Reqs  | Bytes | Reqs  | Bytes |    Bytes     | Elig Bytes | Returned Bytes |
===============================================================================================================================================
|    1018 |     640 |      375 |        0.00 |     2.62 |     3M | 19803 |  19GB | 75830 |  70GB |        395MB |       19GB |           70GB |
===============================================================================================================================================

SQL Plan Monitoring Details (Plan Hash Value=3427044074)
================================================================================================================================================================================================================================
| Id   |             Operation              |        Name        |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read  | Read  | Write | Write |  Mem  | Temp | Activity |            Activity Detail   | Progress |
|      |                                    |                    | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs  | Bytes | Reqs  | Bytes |       |      |   (%)    |         (# samples)          |          |
================================================================================================================================================================================================================================
|    0 | SELECT STATEMENT                   |                    |         |       |           |        |     1 |          |       |       |       |       | .     |     .|          |                              |          |
|    1 |   HASH GROUP BY                    |                    |      13 |    7M |           |        |     1 |          |       |       |       |       | .     |     .|          |                              |          |
|    2 |    VIEW                            |                    |    265M |    7M |           |        |     1 |          |       |       |       |       | .     |     .|          |                              |          |
| -> 3 |     WINDOW SORT PUSHED RANK        |                    |    265M |    7M |       991 |    +31 |     1 |        0 |       |       | 76255 |  70GB | 1MB   | 70GB |    91.35 | Cpu (561)                    |          |
|      |                                    |                    |         |       |           |        |       |          |       |       |       |       |       |      |          | direct path write temp (368) |          |
| -> 4 |      HASH JOIN RIGHT OUTER         |                    |    265M |    3M |      1020 |     +2 |     1 |     975M |       |       |       |       |   8GB | .    |     7.37 | Cpu (75)                     |          |
|    5 |       INDEX STORAGE FAST FULL SCAN | MARKETS_PK         |    138M | 48711 |        30 |     +1 |     1 |     138M |       |       |       |       | .     |    . |     0.49 | Cpu (5)                      |     100% |
| -> 6 |       HASH JOIN RIGHT OUTER        |                    |    151M |    2M |       992 |    +30 |     1 |      61M |       |       |       |       | 492MB | .    |     0.59 | Cpu (6)                      |          |
|    7 |        TABLE ACCESS STORAGE FULL   | DIVIDEND           |      5M |  113K |         9 |    +22 |     1 |       5M |  3271 |   3GB |       |       | .     |     .|     0.10 | Cpu (1)                      |          |
| -> 8 |        TABLE ACCESS STORAGE FULL   | CAM                |    149M |    2M |       992 |    +30 |     1 |      48M | 16532 |  16GB |       |       |  10MB | .    |     0.10 | Cpu (1)                      |          |
================================================================================================================================================================================================================================

SQL>











Connor McDonald
September 17, 2024 - 4:30 am UTC

It might be my failing eyesight :-) but in this part

SELECT
                     ROW_NUMBER() OVER(
                         PARTITION BY cam.event_id
                         ORDER BY
                             cam.event_revision_id DESC
                     ) AS rownumber,
                     cam.instrument_id,
                     cam.event_id,
                     cam.event_id                     AS eventid,
                     cam.event_revision_id            AS eventrevisionid,
                     cam.action_type                  AS actiontype,
                     cam.announcement_date            AS announcementdate,
                     cam.announcement_status          AS announcementstatus,
                     cam.country_of_inc_code          AS countryofinccode,
                     cam.cusip                        AS cusip,
                     cam.declaration_date             AS declarationdate,
                     cam.early_deadline_expiry_date   AS earlydeadlineexpirydate,
                     cam.effective_date               AS effectivedate,
                     cam.entry_date                   AS entrydate,
                     cam.event_status                 AS status,
                     cam.event_subtype                AS subtype,
                     cam.event_type                   AS type,
                     cam.expiration_date              AS expirationdate,
                     cam.home_market_code             AS homemarketcode,
                     cam.home_market_country          AS homemarketcountry,
                     cam.instrument_name              AS instrumentname,
                     cam.security_type                AS securitytype,
                     cam.isin                         AS isin,
                     cam.record_date                  AS recorddate,
                     cam.related_event_id             AS relatedeventid,
                     cam.ticker                       AS ticker,
                     cam.sedol                        AS sedol,
                     cam.sedols                       AS sedols,
                     cam.offered_company_id,
                     cam.offering_company_id
                 FROM
                     ca.cam left
                     JOIN ca.dividend ON cam.event_id = dividend.event_id
                                         AND dividend.status IS NOT NULL
                     LEFT JOIN ca.markets ON markets.event_id = cam.event_id
                 WHERE
                     cam.event_id > 0
                     AND cam.event_status !=:1


I see no reference to columns from DIVIDEND or MARKETS. So perhaps try the ROW_NUMBER reduction on CAM only and *then* join to these tables, to carry less data into the sorting.
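
For illustration, a sketch of that shape (untested against your schema; it assumes only EVENT_TYPE and EVENT_ID are needed for the final aggregate):

SELECT c.event_type AS type,
       COUNT(c.event_id) AS selectedEventCategoryCount
FROM  (SELECT event_id, event_type
       FROM  (SELECT cam.event_id,
                     cam.event_type,
                     ROW_NUMBER() OVER (PARTITION BY cam.event_id
                                        ORDER BY cam.event_revision_id DESC) AS rn
              FROM   ca.cam
              WHERE  cam.event_id > 0
              AND    cam.event_status != :1)
       WHERE  rn = 1) c
      LEFT JOIN ca.dividend d ON c.event_id = d.event_id AND d.status IS NOT NULL
      LEFT JOIN ca.markets  m ON m.event_id = c.event_id
GROUP BY c.event_type;

The window sort then only carries two small columns from CAM rather than the fully joined rows. Be aware the counts can differ from your version, because here any fan-out from DIVIDEND/MARKETS happens after the rn = 1 filter instead of before it - and since no columns from those tables are selected, it is worth confirming whether the joins are needed at all.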

improve query to fetch first 1 rows only faster

Dan, October 18, 2024 - 3:49 am UTC

Thanks for your help to the Oracle community.

I have a large composite partitioned table, MKTH.F_MKT_PRC. It is partitioned by (range + hash) on PRC_DT and LST_ID.

MKTH.F_MKT_PRC_PK is the primary key, a locally partitioned index on (LST_ID, SRC_ID, SNP_ID, PRC_DT, REC_DT_START).

The application needs to fetch just one row for a given value of the partition key column (PRC_DT), along with some other filters.

I tried a PARALLEL hint to speed up the search, but it does a lot of I/O (almost 100 GB) and takes about 6 seconds to return that single row.

1. Any suggestions to speed up this query?

Note that I also tried the /*+ FIRS_ROWS(1) */ hint along with PARALLEL, but it did not improve the query performance.

2. Will it make it faster if I create another local index on (SRC_ID, PRC_DT, REC_DT_START, LST_ID), i.e. put only the columns of interest in the index?

Thanks for your help and suggestions!

Query:
=====
SQL> SELECT /*+ GATHER_PLAN_STATISTICS PARALLEL(32) */
     LST_ID from MKTH.F_MKT_PRC WHERE SRC_ID = 2789 AND PRC_DT = TO_DATE('2024-08-31', 'YYYY-MM-DD') AND REC_DT_START >= TIMESTAMP '2024-10-17 14:51:02.472' AND REC_DT_START <= TIMESTAMP '2024-10-17 14:52:55.0'
     FETCH FIRST 1 ROWS ONLY
/
 490790649

SQL> select * from table(dbms_xplan.display_cursor(null,null, format => 'ALLSTATS LAST +cost +bytes partition'));
SQL_ID  d28d6xh2vz886, child number 1
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS PARALLEL(32) */
LST_ID from
MKTH.F_MKT_PRC WHERE SRC_ID = 2789 AND PRC_DT =
TO_DATE('2024-08-31', 'YYYY-MM-DD') AND REC_DT_START >=
TIMESTAMP '2024-10-17 14:51:02.472' AND REC_DT_START <=
TIMESTAMP '2024-10-17 14:52:55.0' FETCH FIRST 1 ROWS ONLY

Plan hash value: 2861602162

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name                 | Starts | E-Rows |E-Bytes| Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                      |      1 |        |       |  3348 (100)|       |       |      1 |00:00:06.13 |    1516 |       |       |          |
|*  1 |  VIEW                              |                      |      1 |      1 |    26 |  3348  (16)|       |       |      1 |00:00:06.13 |    1516 |       |       |          |
|*  2 |   WINDOW NOSORT STOPKEY            |                      |      1 |      1 |    30 |  3348  (16)|       |       |      1 |00:00:06.13 |    1516 | 73728 | 73728 |          |
|   3 |    PX COORDINATOR                  |                      |      1 |        |       |            |       |       |      1 |00:00:06.13 |    1516 | 73728 | 73728 |          |
|   4 |     PX SEND QC (RANDOM)            | :TQ10000             |      0 |      1 |    30 |  3348  (16)|       |       |      0 |00:00:00.01 |       0 |       |       |          |
|*  5 |      WINDOW NOSORT STOPKEY         |                      |      0 |      1 |    30 |  3348  (16)|       |       |      0 |00:00:00.01 |       0 | 73728 | 73728 |          |
|   6 |       PX BLOCK ITERATOR            |                      |      0 |      1 |    30 |  3348  (16)|     1 |    64 |      0 |00:00:00.01 |       0 |       |       |          |
|*  7 |        INDEX STORAGE FAST FULL SCAN| F_MKT_PRC_PK    |      0 |      1 |    30 |  3348  (16)|  3649 |  3712 |      0 |00:00:00.01 |       0 |  1028K|  1028K| 9286K (0)|
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=1)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY NULL)<=1)
   5 - filter(ROW_NUMBER() OVER ( ORDER BY NULL)<=1)
   7 - storage(:Z>=:Z AND :Z<=:Z AND ("REC_DT_START">=TIMESTAMP' 2024-10-17 14:51:02.472000000' AND "SRC_ID"=2789 AND "PRC_DT"=TO_DATE(' 2024-08-31 00:00:00',
              'syyyy-mm-dd hh24:mi:ss') AND "REC_DT_START"<=TIMESTAMP' 2024-10-17 14:52:55.000000000'))
       filter(("REC_DT_START">=TIMESTAMP' 2024-10-17 14:51:02.472000000' AND "SRC_ID"=2789 AND "PRC_DT"=TO_DATE(' 2024-08-31 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "REC_DT_START"<=TIMESTAMP' 2024-10-17 14:52:55.000000000'))

Note
-----
   - Degree of Parallelism is 32 because of hint

Chris Saxon
October 21, 2024 - 1:11 pm UTC

The query is doing an index-only scan. The problem is that LST_ID is the first column in this index.

LST_ID only appears in the SELECT clause, not the WHERE clause. Because the database searches index columns left-to-right, it has to full scan the index.

As a result, the entries for SRC_ID = 2789 are spread throughout the index. If SRC_ID were the leading column, they would all be grouped in one place.

Will it help to make it faster if i create another local index on (SRC_ID, PRC_DT, REC_DT_START, LST_ID)


This is worth a try, as you're checking SRC_ID & PRC_DT for equality and REC_DT_START over a range. I'd also try it without LST_ID at the end. The query fetches one row, so it should only need one I/O to get the row.
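
As a sketch (the index name here is made up, and it's worth testing both with and without the trailing LST_ID):

create index mkth.f_mkt_prc_ix2
  on mkth.f_mkt_prc ( src_id, prc_dt, rec_dt_start, lst_id )
  local;

With LST_ID on the end the index also covers the SELECT list, so the query can stay index-only; without it the index is a little smaller, but the query needs a table access by ROWID to fetch LST_ID.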

Thank you

Dan, October 28, 2024 - 4:53 am UTC

Hi Chris
The new index really helped speed up the query. Thanks!
Connor McDonald
October 28, 2024 - 6:57 am UTC

Glad we could help