optimizer_index_cost_adj
A reader, September 14, 2003 - 12:04 am UTC
Tom,
I have not seen Metalink make any specific warnings about any other parameter in particular like they did for "OPTIMIZER_INDEX_COST_ADJ".
Is there a particular range at which the value for "OPTIMIZER_INDEX_COST_ADJ" becomes unrealistic for an OLTP application/database?
Under what conditions should the default value of "OPTIMIZER_INDEX_COST_ADJ" be changed from 100 ?
For the query I am looking at, altering this value at the session level to 50 showed a significant performance improvement, but the documentation on Metalink is making me think twice. That's the reason I wanted your expert advice.
Thank you
September 14, 2003 - 9:52 am UTC
if you get a copy of my new book -- i go into this in much more detail.
however, this is just a parameter, just a setting. there are no unreasonable values -- just the values that work best for you. 50 is a good one, 25 is stronger, 100 the default. somewhere between 0 and 100 is the number for you.
the note only states the obvious "a wrong setting for parameter X can be bad"
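(As a minimal sketch of the kind of session-level experiment mentioned above -- the scott demo query is purely a stand-in for your own problem query:)
alter session set optimizer_index_cost_adj = 50;
set autotrace traceonly explain
select ename, sal from scott.emp where deptno = 10;
alter session set optimizer_index_cost_adj = 100;
select ename, sal from scott.emp where deptno = 10;
Compare the two plans and the work they do before even considering an instance-wide change.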
"Proper" value of OPTIMIZER_INDEX_COST_ADJ....
Mark J. Bobak, September 14, 2003 - 2:37 am UTC
I have read elsewhere (original source was a Tim Gorman paper,
I believe), that the purpose of the OPTIMIZER_INDEX_COST_ADJ
is to allow the CBO to understand the relative cost of
a db file sequential read vs. a db file scattered read.
As such, the recommendation I've read is to set your
OPTIMIZER_INDEX_COST_ADJ =
select round((select average_wait
from v$system_event
where event='db file sequential read')
/
(select average_wait
from v$system_event
where event='db file scattered read')
* 100)
from dual;
Tom,
I'm not sure I've ever heard of Oracle expressing this,
though. Have you seen/heard this before? Do you agree?
Thanks,
-Mark
September 14, 2003 - 9:55 am UTC
that is just using the ratio of waits for indexed reads/full scan reads. it is a valid "start point". but it'll have to be iteratively set as the very act of setting it will change the ratio as you'll start doing more index reads (most likely)
Seldom results
Michael, November 12, 2003 - 2:45 am UTC
Hello Tom!
I am trying to find out "good" values for the optimizer_index_cost_adj and optimizer_index_caching parameters.
For the cost_adj parameter i used the following query to find a starting point (as you recommended):
SQL> l
1 select round((select average_wait
2 from v$system_event
3 where event='db file sequential read')
4 /
5 (select average_wait
6 from v$system_event
7 where event='db file scattered read')
8 * 100) as starting_point
9* from dual
When i execute this statement on database 1 i get:
SQL> /
STARTING_POINT
--------------
7558
I was a little bit stunned by this value. As far as i know, this would mean that an index access is much more "expensive" than a full table access.
Here are all the columns for the two events from v$system_event:
SQL> select * from v$system_event
2 where event in ('db file sequential read', 'db file scattered read');
EVENT TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED AVERAGE_WAIT
-------------------------- ----------- -------------- ----------- ------------
db file sequential read 92814138 0 3787666073 40,8091499
db file scattered read 4349122 0 2348427 ,539977264
Then i inspected database 2. Here, i found the other extreme:
SQL> select round((select average_wait
2 from v$system_event
3 where event='db file sequential read')
4 /
5 (select average_wait
6 from v$system_event
7 where event='db file scattered read')
8 * 100) as starting_point
9 from dual;
STARTING_POINT
--------------
0
0!!! Then i looked into the v$system_event table (i thought, perhaps timed_statistics is disabled), but:
SQL> select * from v$system_event
2 where event in ('db file sequential read', 'db file scattered read');
EVENT TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED AVERAGE_WAIT
---------------------------- ----------- -------------- ----------- ------------
db file sequential read 119504401 0 7598589 ,063584177
db file scattered read 368384847 0 1,1029E+10 29,9391977
So the ratio is correct (0.063584177 / 29.9391977 = .0021).
Are these values from the two databases realistic? How are these extreme ratios possible? (I suppose i did something wrong.)
And: What should i do now with the optimizer_index_cost_adj parameter on these databases?
November 12, 2003 - 10:28 am UTC
well, don't put words in my mouth -- they recommended it, i'm not endorsing it.
looks like that database has been up a while (hope so!) -- aggregate numbers over a long period are "meaningless".
try cost_adj = 10, or 25, or 90, or 100.
but wait -- first -- have you IDENTIFIED A PERFORMANCE PROBLEM that you are actually trying to solve and have you (most important -->) identified the root cause!
don't change anything until you do.
Reset of v$system_event possible?
Michael, November 13, 2003 - 2:56 am UTC
Yes, database 2 has been up since mid-August and database 1 has been up for a week.
Is there a way to "reset" the v$system_event table (without a restart of the database) to get the data for, say, a single hour?
November 13, 2003 - 6:57 am UTC
you would
create table x as select * from v$system_event;
wait a bit.......
create table y as select * from v$system_event;
now, join X to Y and diff the values -- that'll be the IO's and such during that "wait a bit" period.
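(To make the diff concrete -- a minimal sketch of that join, using the throwaway copies x and y from above; the nvl guards against an event that did not exist yet in the first snapshot:)
select y.event,
       y.total_waits - nvl(x.total_waits, 0) as waits_in_interval,
       y.time_waited - nvl(x.time_waited, 0) as time_waited_in_interval
  from y, x
 where x.event (+) = y.event
   and y.event in ('db file sequential read', 'db file scattered read');
The ratio discussed above would then be computed from these interval figures rather than from the since-startup aggregates.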
Thanks!
Michael, November 13, 2003 - 9:39 am UTC
Hints Vs Index Parameters
Mike, February 09, 2004 - 11:28 am UTC
Tom,
We are moving to 9i from 8i...and also moving from rule to cost based.
With just analyzing tables and having the optimizer changed to "CHOOSE", we are seeing a lot of performance gains in the 9i database in comparison to 8i (rule based)... I mean some of the stuff that used to drag... is flying now...
On the other hand some queries are dragging a bit... they are not picking indexes that they were using while in 8i...
Question: Is it a good idea to play with these index optimizing parameters (which will affect the whole database) or just start adding hints to the queries that are slow... to use particular indexes?
Thanks.
February 09, 2004 - 12:08 pm UTC
I prefer things that give the optimizer information such as
o optimizer_index_cost_adj
o optimizer_index_caching
o good hints, like FIRST_ROWS <<--- might be all you need.
optimizer_index_cost_adj
R. Suchak, November 12, 2004 - 1:01 pm UTC
Hi Tom,
I am probably one of your biggest fans and I really appreciate all your hard work to teach Oracle. I have both your books and as a sign of goodwill for all your amazing work, I have been recommending your book to everyone....kids in kindergarten to senior citizens learning how to use the internet (ok....i know i sound crazy but you get the point).
On page 310 of your "Effective Oracle by Design", in the first paragraph of the "Set Optimizer_Index_Cost_Adj" section, towards the end of the section you say, "Setting this value to 50 causes the optimizer to consider a single-block table access as half as expensive as multiblock I/O would be, effectively cutting in half the cost of table access (50/100)."
The table access part is an error, right? It should be "...effectively cutting in half the cost of index access (50/100)", right?
I checked the reference manual and Tim Gorman's (SageLogix) "The Search for Intelligent Life in the CBO" paper that you recommended and both of them seem to say the contrary.
Please excuse me if I am mistaken.
November 13, 2004 - 9:39 am UTC
Well, I consulted with Jonathan Lewis on this and he had this to say on the topic:
It is not easy to describe the effect of the optimizer_index_cost_adj
in a concise sentence, because it applies to a very precise part of
the execution plan.
In effect, it is used to scale the cost of the lines in a plan that
are the result of an indexed access path. The cost shown on
such lines is Oracle's estimate of number of index blocks to be
visited plus the number of table blocks to be visited. The scaling
factor is (optimizer_index_cost_adj / 100).
(faked) example:
select statement cost = 20
table T access by rowid cost = 20
index range scan on I cost = 4 card = 16
In this case, Oracle costs for 4 index block visits
plus 16 table block visits, for a total of 20.
Set optimizer_index_cost_adj to 50, and you will see:
select statement cost = 10
table T access by rowid cost = 10
index range scan on I cost = 4 card = 16
(NOTE - the cost on the index line does not change,
but its contribution to the table line IS halved).
Since this parameter has most effect on systems that
have poor indexes and consequently do large range scans,
it is easy to be fooled into thinking that it is only the
table access component that changes since most of the
cost contribution of large range scans comes from
the table and not from the index. (In fact I thought
that this was the case when I first started to test 9i,
and got slightly different answers from 8i).
Regards
Jonathan Lewis
http://www.jlcomp.demon.co.uk/faq/ind_faq.html -- The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/seminar.html -- Optimising Oracle Seminar - schedule updated Sept 19th
8i/9i(9205)
Shivaswamy, March 13, 2005 - 12:37 pm UTC
Tom,
One of the queries from a package used to "hang": it won't complete even after 24 hours. The whole package used to complete in 5 hours in the 8i days. I added histograms on the indexed columns and for a couple of runs of this monthly report it seemed to work OK. Now, again it "hangs". It keeps doing the index scan but won't even show up as "db file sequential read" against the session. Finally, if I "simulate the 817 optimizer" by setting optimizer compatibility, or a set of session-modifiable parameters to do the same, the query completes in about 20 minutes.
OK, so I thought of changing my optimizer_index_caching & optimizer_index_cost_adj from 20/80 to 80/20, and the query completes in about the same 20 minutes. Now, the question is what should I do? I have umpteen other reports running in my database that may get affected. Can I not have a setting which suits all?
Any input? I can give the query and the autotrace output, if that helps.
Thanks in advance.
March 13, 2005 - 12:57 pm UTC
so, have you isolated the query
have you looked at the autotrace traceonly explains from 8i and 9i
have you identified the differences
have you verified the cardinalities are even close
(it would appear that if gathering stats "fixed" it for a while, that perhaps your stats are stale once more..)
shivaswamy, March 13, 2005 - 2:45 pm UTC
Yes Tom. I have isolated the query. I have recreated the histograms - and so stats are not stale (for all indexed columns, size auto). In 9i, since it runs forever, I have only the plan from v$sql_plan. I had set trace for the session with set events '10046 trace name context forever, level 12'. But I had to kill the session. I have the plans from three scenarios here:
9i without adjusting - from v$sql_plan
OPERATION OBJECT COST CARDINALITY KBYTES
-------------------------------------------------- -------------------------- ----------- ----------
INSERT STATEMENT CHOOSE Cost=59918 59,918
SORT GROUP BY 59,918 3,874,457 586,466
HASH JOIN 15,359 3,874,457 586,466
INDEX FAST FULL SCAN CLTCOF1_IDX3 1 3,403 47
TABLE ACCESS BY INDEX ROWID IPP020 15,217 1 0
NESTED LOOPS 15,217 1,648,235 226,954
HASH JOIN 842 28,960,544 3,026,151
INDEX FAST FULL SCAN IPP566_IDX1 268 146,755 7,596
TABLE ACCESS FULL EAPACTIVITY 158 369,143 19,467
BITMAP CONVERSION TO ROWIDS
BITMAP AND
BITMAP CONVERSION FROM ROWIDS
INDEX RANGE SCAN IPP020_IDX1 1
BITMAP CONVERSION FROM ROWIDS
INDEX RANGE SCAN IPP020_IDX2 1
9i with
optimizer_index_caching =20
optimizer_index_cost_adj =80
From autotrace
784630 rows selected.
Elapsed: 00:13:837.04
Execution Plan
----------------------------------------------------------
0
SELECT STATEMENT Optimizer=CHOOSE (Cost=21417 Card=104 Bytes=16120)
1 0 SORT (GROUP BY) (Cost=21417 Card=104 Bytes=16120)
2 1 HASH JOIN (Cost=21413 Card=104 Bytes=16120)
3 2 HASH JOIN (Cost=463 Card=1832 Bytes=221672)
4 3 HASH JOIN (Cost=174 Card=4047 Bytes=275196)
5 4 INDEX (FAST FULL SCAN) OF 'CLTCOF1_IDX3' (NON-UNIQUE) (Cost=1 Card=3403 Bytes=47642)
6 4 TABLE ACCESS (FULL) OF 'EAPACTIVITY' (Cost=158 Card=373313 Bytes=20158902)
7 3 INDEX (FAST FULL SCAN) OF 'IPP566_IDX1' (NON-UNIQUE) (Cost=268 Card=146755 Bytes=7778015)
8 2 TABLE ACCESS (FULL) OF 'IPP020' (Cost=19857 Card=16516287 Bytes=561553758)
Statistics
----------------------------------------------------------
0 recursive calls
21 db block gets
332906 consistent gets
472818 physical reads
0 redo size
40150276 bytes sent via SQL*Net to client
30042 bytes received via SQL*Net from client
786 SQL*Net roundtrips to/from client
0 sorts (memory)
1 sorts (disk)
784630 rows processed
8i with Autotrace:
784630 rows selected.
==> It took about 17 Min. with an array size 100
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=23118 Card=104 Bytes=24440)
1 0 SORT (GROUP BY) (Cost=23118 Card=104 Bytes=24440)
2 1 HASH JOIN (Cost=23114 Card=104 Bytes=24440)
3 2 TABLE ACCESS (FULL) OF 'IPP566' (Cost=327 Card=146755 Bytes=14088480)
4 2 HASH JOIN (Cost=22257 Card=180931 Bytes=25149409)
5 4 HASH JOIN (Cost=184 Card=4047 Bytes=408747)
6 5 TABLE ACCESS (FULL) OF 'CLTCOF1' (Cost=4 Card=3403 Bytes=47642)
7 5 TABLE ACCESS (FULL) OF 'EAPACTIVITY' (Cost=158 Card=373313 Bytes=32478231)
8 4 TABLE ACCESS (FULL) OF 'IPP020' (Cost=19857 Card=16516287 Bytes=627618906)
Statistics
----------------------------------------------------------
0 recursive calls
15 db block gets
333924 consistent gets
339181 physical reads
960 redo size
39127424 bytes sent via SQL*Net to client
290552 bytes received via SQL*Net from client
7848 SQL*Net roundtrips to/from client
0 sorts (memory)
1 sorts (disk)
784630 rows processed
I value your observations, very much.
A reader, March 13, 2005 - 7:05 pm UTC
We had a similar issue with 9i, which was overcome by setting the undocumented parameter _btree_bitmap_plans to false
Regi
My thoughts
Venkat, March 14, 2005 - 12:18 am UTC
Hello all..,
I am a **BIG** fan of Tom, as are a lot of other readers here. Just wanted to post my thoughts here..
I came across a similar situation where one of our applications was migrating from 8.0 straight to 10g (yes.. no code change happened in the past 6 years, and this change involves RBO->CBO as well)..
And, even though I am not as intelligent as the Oracle gurus here, I am a sincere follower of Tom.. and I would first ask questions of myself such as "did I autotrace it? did I benchmark it before I blame it on Oracle and call it a bug?"
and here is small piece of info..
consider this query:
====================
select * from table1 t1, table2 t2, table3 t3 ...... table16 t16
where
<<join conditions involving tables 1..6>>
and table7.rowid = (select max(rowid) from table7 where
keyid = t6.keyid and keyid = t5.keyid)
and <<join conditions involving tables 8..16>>
this query worked fine under RBO and gave the results in < 1 minute, but under CBO it took more than 1 hour..
I verified all indexes.. all table stats and column stats and the dbms_stats procedures (even generating histograms).. played with the optimizer_index_cost_adj and optimizer_index_caching parameters but to no effect (meaning no rule of thumb led to an optimum value)
then I generated the 10053 report and analyzed it.. realised the CBO was pushing the rowid condition (on table7) to the extreme final step, and hence merging the two sets (table1..table6) merge join cartesian (table8..table16)
added a simple/dummy join condition to make the optimizer believe all the tables' primary keys are used, like:
select * from table1 t1, table2 t2, table3 t3 ...... table16 t16
where
<<join conditions involving tables 1..6>>
and table7.keyid = t5.keyid --duplicated
and table7.rowid = (select max(rowid) from table7 where
keyid = t6.keyid and keyid = t5.keyid)
and <<join conditions involving tables 8..16>>
and boom.. the CBO completed the task within 10 seconds (RBO used to take 1 minute)
so the lesson here is: don't blindly tune any init parameters (including index_cost_adj, index_caching, and all the undocumented _<<blah blah>> parameters) and expect some magic to happen. Understand the CBO.. it's lots of fun when it's under your control (rather than under some Metalink analyst's control)
Good luck everyone
Thank you, but..
Shivaswamy, March 14, 2005 - 7:07 am UTC
Venkat,
Thanks for your post. I understand there needs to be a reason to change something. But when you have tried everything and are in a hands-up situation, there are certain parameters which you need to "try out". And when you are doing that, if you don't understand some behavior, I don't know anybody better than Tom (I had already discounted a TAR on Metalink) to approach for a discussion.
more thoughts to Venkat's...
denni50, March 14, 2005 - 10:18 am UTC
We upgraded our prod and test db's from 8i to 9iR2..that also involved RBO conversion to CBO. Everything was performed on the test db before implementing on production.
Although we do not have queries like Venkat's that join sixteen tables... I did notice when using histograms the system came to a screeching halt.. once I removed the histograms and gathered stats on tables and indexes only, the CBO was free to do its magic... so yes.. you need to understand what you're changing and what effects those changes have on performance. That's why we have a test db and I have my own personal db that I use for catastrophic scenarios.
There's an excellent white paper on CBO here that Tom has posted a link to. That paper really gave me insight into the workings of the CBO, along with reading Tom's books and asking questions here and reading questions/answers posted by others.
CBO is not that complex to understand once you take the time to read and experiment.
We have since implemented change to Production, which utilizes canned software, with improved success as expressed by users and mgt.
Siebel
VA, June 22, 2005 - 11:39 am UTC
We are having very poor performance using Siebel 7.7 with Oracle 9iR2 under CBO.
I ran a
exec dbms_stats.gather_schema_stats('siebel',cascade=>true)
Here are the init.ora parameters they recommend
######################################################################
# Following parameters are from Siebel Technical Note 582
hash_join_enabled = true
optimizer_dynamic_sampling = 1
optimizer_features_enable = 9.2.0
optimizer_mode = choose
partition_view_enabled = false
query_rewrite_enabled = false
query_rewrite_integrity = enforced
star_transformation_enabled = false
optimizer_max_permutations = 100
optimizer_index_cost_adj = 1
optimizer_index_caching = 0
db_file_multiblock_read_count = 32
Also, the Siebel app server does an alter session set optimizer_goal=first_rows for all the sessions in its connection pool.
All this is fine; since it's an interactive application, they are optimizing for fast response time.
But... performance sucks! In the earlier Siebel version (7.5) on Oracle 8i/RBO, things were very snappy. With CBO, everything is all round sluggish. Sometimes orders of magnitude slower!
Needless to say, all the queries are canned, we can't change anything. We can get a trace file containing the SQL (with bind variable values) and timing for each SQL statement for each screen.
Questions:
1. How do those parameters above look?
2. Doesn't optimizer_index_cost_adj=1 and optimizer_index_caching=0 look a little "odd"?
Any other ideas?
Thanks
June 23, 2005 - 1:25 pm UTC
this is far too vague to even comment on.
and thanks so much for the colorful language.
most people, when setting the optimizer index cost adjust low would up the optimizer index caching
however, if your problem is "index abuse" (first rows, cost adjust low), this will just make it worse.
so, you need to discover "what the root cause is"
I would suggest however that to tune siebel, you involve siebel, they hold most all of the cards here.
To Siebel
denni50, June 23, 2005 - 3:58 pm UTC
I don't know anything about Siebel, but Oracle is Oracle.
you may want to try changing the optimizer_index_caching to 90 and the optimizer_index_cost_adj to either 20,25,30(50 if
it's a data warehouse or dss).
Try changing these parameters then run some queries to see
if performance improves.
There's an excellent white paper on CBO on this website
that thoroughly and clearly explains how these 2 parameters
should be set. You can do an Alter Session Set then run
an explain plan.
hth
June 23, 2005 - 7:11 pm UTC
no, disagree
with 3rd party apps, you HAVE to work in "their box".
it is their application, they should support your efforts. and they may not support the willy nilly setting of parameters. At all.
Oracle is Oracle when you own the application. Else, Oracle is just the black box behind the application and the application vendor must be involved.
VA, June 23, 2005 - 4:06 pm UTC
"most people, when setting the optimizer index cost adjust low would up the optimizer index caching"
Right, that's what I thought. So I wanted to know what you thought of cost_adj=1 and caching=0! What would these settings do? Don't they sort of cancel each other out? cost_adj=1 is saying "indexes are cheap, go for them". caching=0 is saying "indexes are going to incur a disk read".
Index abuse is not my problem, I think. It's an interactive web app, so indexes should be used generously to get the first set of rows fast. People don't page beyond 2-3 pages anyway.
We have been working with Siebel Support for months now, and we aren't getting anywhere. They suggest moving to 10g and seeing if that helps.
[Sorry for the colorful language, I thought s*ks was an acceptable term to use these days]
June 23, 2005 - 7:14 pm UTC
well, I would approach this differently. take a "bad query" -- figure out "why"
is it bad stats?
is the optimizer just getting it wrong?
if I alter session and use varying values of the optimizer parameter - what happens?
would system statistics help?
is the cause a table that was empty (temp table thing) but full at runtime?
silver bullets do not in general exist.
<quote from effective oracle by design>
OPTIMIZER_INDEX_CACHING and OPTIMIZER_INDEX_COST_ADJ Wrap-up
The setting of the OPTIMIZER_INDEX_CACHING and OPTIMIZER_INDEX_COST_ADJ parameters will not make the plans run faster. It just affects which plan is chosen. It is important to remember that setting these parameter values does not affect how much of the index is actually cached or how expensive a single-block I/O truly is in relation to multiblock I/O. Rather, this allows you to pass this information you have learned onto the CBO so it can make better decisions on your system. Also, it points out why just looking at the cost of a query plan in an attempt to determine which plan is going to be faster is an exercise in futility: Take two identical plans, with two different costs, which one is faster? Neither is.
The effect of adjusting these two parameters is that they have a profound and immediate impact on the CBO. They radically change the costing assigned to various steps. This, in turn, dramatically affects the plans generated. Therefore, you want to test thoroughly the effects of these parameters on your test system first! I've seen systems go from nonfunctional to blazingly fast simply by adjusting these two knobs. Out of all of the Oracle initialization parameters, these two are most likely to be defaulted inappropriately for your system. Adjusting them is likely to greatly change your opinion of the CBO's abilities.
You should consider setting these to nondefault values for many systems, or at least testing the two extremes:
o The default settings of OPTIMIZER_INDEX_CACHING = 0 and OPTIMIZER_INDEX_COST_ADJ = 100. These are typically appropriate for many data warehouse/reporting systems.
o The settings of OPTIMIZER_INDEX_CACHING = 90 and OPTIMIZER_INDEX_COST_ADJ = 25. These are typically appropriate for many transactional/OLTP systems.
</quote>
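(A minimal sketch of testing those two extremes in a test session -- the scott demo query is only a stand-in for one of your own problem queries:)
-- "warehouse-ish" extreme (the defaults)
alter session set optimizer_index_caching = 0;
alter session set optimizer_index_cost_adj = 100;
set autotrace traceonly explain
select e.ename, d.dname from scott.emp e, scott.dept d where e.deptno = d.deptno;

-- "OLTP-ish" extreme
alter session set optimizer_index_caching = 90;
alter session set optimizer_index_cost_adj = 25;
select e.ename, d.dname from scott.emp e, scott.dept d where e.deptno = d.deptno;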
A reader, June 23, 2005 - 7:51 pm UTC
Right, I understand all that, I have your book. I have a bad query. I really cannot analyze the query or the plan because most all Siebel queries join at least 10 tables with a handful of outer-joins; the query plans are pages and pages long, it's very complex.
All I was trying to understand was the interaction between the 2 parameters, index_cost_adj and index_caching. Even the excerpt from your book you pasted talks about setting them to the 2 extremes.
Again, what is the effect of setting index_cost_adj=1 and index_caching=0? How does it affect CBO's costing of plans?
Thanks
June 23, 2005 - 8:36 pm UTC
but you CAN compare an autotrace of the query (the guessed cardinalities) with the TKPROF reality and see if we are even *close*
short of cutting and pasting the entire chapter on them, any chance you have access to a copy of it? chapter 6?
A reader, June 23, 2005 - 7:54 pm UTC
By 2 extremes, I meant (cost_adj=0-25 and caching=90-100) and (cost_adj=75-100 and caching=0-25) is documented. Nowhere does it mention the effect of setting them *both* on the same side
June 23, 2005 - 8:43 pm UTC
In short:
We can use the OPTIMIZER_INDEX_CACHING parameter to tell Oracle the percentage of index blocks it could expect to find in the buffer cache on average. The default of zero causes Oracle to believe that the cache is devoid of index blocks. A maximum value of 100 causes Oracle to believe that the cache has all of the index blocks.
If you think of the OPTIMIZER_INDEX_CACHING parameter as being used to tell Oracle the percentage of an index that is cached, you can think of the OPTIMIZER_INDEX_COST_ADJ parameter as telling Oracle how much of the table data will be cached. The lower the number, the less costly single-block table accesses become; conversely, the higher this number, the more costly. A way to think of this would be that this number reflects the cost of performing multiblock I/O (associated with full-table scans, for example) versus the cost of performing single-block I/O (associated with index reads). If you leave this parameter at the default setting of 100, these operations are costed the same. Setting this value to 50 causes the optimizer to consider a single-block table access as half as expensive as multiblock I/O would be, effectively cutting in half the cost of table access (50/100).
so, with
optimizer_index_cost_adj = 1
optimizer_index_caching = 0
all index accesses are PHYSICAL IO. But single block IO is pretty cheap.
You reduced the cost of a table access by rowid.
I suppose if you wanted "first rows for sure",
o The settings of OPTIMIZER_INDEX_CACHING = 90 and OPTIMIZER_INDEX_COST_ADJ =
25. These are typically appropriate for many transactional/OLTP systems.
might be something to consider.
Very useful. Thanks a lot.
A reader, June 23, 2005 - 9:00 pm UTC
Tom
denni50, June 24, 2005 - 10:07 am UTC
We are running 3rd party apps on our systems (prod & dev).
The vendor was vehemently opposed to converting to CBO
from RBO. I went ahead and changed it anyhow (on test first)
after researching all the Oracle Tuning and Performance
strategies, for us the change was a successful conversion
with no glitches and a tremendous benefit in performance.
If Siebel is a business application(like what we use)
then I can't see it hurting to try and test different
parameter settings on a test system.
If the app is built with a lot of hints then that's another
issue.
anyhow..I was merely offering a suggestion based on my
experience using 3rd party apps on our system.
June 24, 2005 - 6:22 pm UTC
but it can hurt, you have to be careful. You bought a 3rd party app, not a database in this case.
something goes wrong and the first thing they say will be "we don't support that configuration...."
It has been my experience that playing around without the vendor support is quite dangerous from a supportability perspective.
one last thing...
denni50, June 24, 2005 - 10:18 am UTC
you pretty much told 'Siebel' the very same thing that
you disagreed with in my post???
<you wrote>
o The settings of OPTIMIZER_INDEX_CACHING = 90 and OPTIMIZER_INDEX_COST_ADJ =
25. These are typically appropriate for many transactional/OLTP systems.
might be something to consider
<end>
June 24, 2005 - 6:24 pm UTC
that was cut and pasted from the book, and given as a way of saying "i give up, do what you will"
I do not believe it to be the right approach (i said as much "i would not take this approach....")
OK I see now....
denni50, June 25, 2005 - 10:16 am UTC
Siebel is a db system like SqlServer is SqlServer and
Oracle is Oracle....etc.
The poster was asking about the two Oracle optimizer parameters (are they also used with Siebel?).
I thought Siebel was a business app running on Oracle.
In that case Siebel would need to support any performance
and configuration issues.
In our case, when we converted to CBO it was with management's knowledge and approval: the conversion would have to be tested first, then disclosed to the vendor that we had converted to CBO on the test system; when everything tested OK we wanted to proceed with converting the production db.
The vendor then agreed and said they had other clients that
converted to cbo because they were using function based indexes.
Believe me nothing is done without being tested first and mgt's involvement...that's why I have 2 servers (test, personal).
I believe their opposition in the beginning was because
they didn't want to be bothered, however when we upgraded
to 9iR2 and I told them RBO was being desupported and that
we would need to convert to CBO eventually they offered no
further protest.
June 25, 2005 - 11:37 am UTC
The vendor then agreed
that is the big thing, the vendor must support your configuration. It gets tricky with 3rd party software, you open yourself to all kinds of support issues.
Siebel
A reader, June 28, 2005 - 7:32 pm UTC
"well, I would approach this differently. take a "bad query" -- figure out "why""
I have a "bad query". It joins 55 tables, yes 55 tables. 6 of them are outer-joined. How does one even begin to analyze such a monstrous query? It takes 300 seconds to execute. How can I even determine if the optimizer is getting it wrong? I would need to compare the actual row source numbers from the tkprof with the estimated numbers from the explain plan. For a plan that is literally tens of screenfuls long, how do I do this?
How can statspack help here? Would that help to pinpoint the low-hanging fruit and attack those queries first?
How can I use after-the-fact views like v$sql_plan, v$sql_plan_statistics, etc to zoom in on bad queries? The latter view always returns 0 rows.
Thanks
June 28, 2005 - 8:45 pm UTC
55 tables in 300 seconds seems pretty darn good just from a parsing perspective :)
quick questions:
how many rows out
does it aggregate or just a straight select.
Siebel
A reader, June 28, 2005 - 8:48 pm UTC
Yes, even I was pretty amazed that it took just a second or so to parse the query. Oh yeah, Siebel had me set optimizer_max_permutations to 100 (the default is much higher), so I guess that explains it.
4 rows from the query
it's a straight select, no scalar subqueries, no aggregation, no order by
June 28, 2005 - 8:50 pm UTC
ok, look at plan in tkprof, look for index range scans with LOTS of output rows followed by a table access by index rowid with FEW output rows.
do you have those?
setting that to 100 -- that could be really bad for a query like this -- was that setting just for this query or is that what siebel says to set for all things?
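(Schematically, what the pattern described above looks like in a tkprof row source section -- the numbers and object names here are made up for illustration, not taken from the poster's trace:)
Rows     Row Source Operation
-------  ---------------------------------------------------
      3  TABLE ACCESS BY INDEX ROWID SOME_TABLE
 500000   INDEX RANGE SCAN SOME_INDEX
Half a million rowids coming out of the index with only a handful of rows surviving the table-level filters would suggest the selective predicate is being applied at the table, not via the chosen index.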
A reader, June 28, 2005 - 8:57 pm UTC
OK I will look at the tkprof for what you suggest.
That is the init.ora setting they recommend! I posted all the Siebel-recommended init.ora settings above.
RBO vs CBO parse time
VA, June 30, 2005 - 12:33 pm UTC
Here is what I am seeing for another problem Siebel query
RBO
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.92 0.89 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.02 0.01 0 221 0 4
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.94 0.91 0 221 0 4
Takes 0.92 seconds to parse the query and 0.02 to fetch, for a total of 0.94; 4 rows returned
CBO
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 5.02 4.90 0 0 0 0
Execute 1 0.01 0.00 0 0 0 0
Fetch 2 0.01 0.01 0 218 0 4
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 5.04 4.91 0 218 0 4
The execution plan is identical, LIOs are almost identical.
CBO just takes longer to parse the 33-table join query!
Yes, the Siebel app server does use bind variables, so I guess the first occurrence of this query in a session will get hit by the parsing penalty and successive queries will soft parse and be fast [I verified this].
But because of 3-tier architecture, connection pooling, etc, user connections come and go rapidly. So, even if the query is in the shared pool, a different Oracle session might execute it and it will need to be parsed again.
[Increasing optimizer_max_permutations here will only make the problem worse, right?]
Comments? Thanks
June 30, 2005 - 12:41 pm UTC
.... Increasing optimizer_max_permutations here will only make the problem worse,
right? .....
not necessarily, many times there is a finite, small number of realistic join orders and you never get close to the max.
Siebel 7.7
VA, June 30, 2005 - 3:56 pm UTC
I am ready to give up. No matter what we do, we just cannot get satisfactory performance with Siebel 7.7/Oracle 9i CBO. Either the parse or the execute or both take too long. RBO is consistently faster.
Frankly, I am not surprised. Most all of the queries are 30 to 55 table joins, with outer joins, order bys and what not. No wonder the CBO is getting bogged down. RBO just has to apply its 13 simple rules and it is done!
If there are any Oracle DBAs out there that support a Siebel 7.7 installation running on Oracle 9iR2, please chime in. How did you get this darn monster to perform satisfactorily?
Thanks
Alberto Dell'Era, June 30, 2005 - 5:16 pm UTC
> user connections come and go rapidly. So, even if the query
> is in the shared pool, a different Oracle session might
> execute it and it will need to be parsed again.
But the only thing that might have to be repeated is strict-sense parsing (converting the stmt to its tree representation, and decorating it with the semantics - to e.g. understand that "t" in a tree node is that particular table in that particular schema).
But that would have to be done for the RBO and CBO alike! Once semantically analyzed, the PARSE db call would turn into a soft parse, and the plan found in the library cache - I would expect exactly the same elapsed time for a soft parse, independently from whether the CBO or RBO is being used (that may impact the query optimization time, but if the plan is already in the library cache, the query optimization penalty has been already paid!).
Am I missing something?
If that's true - maybe it's just a problem of the shared pool being too small - those monster plans eating a lot of RAM, thus filling the library cache, evicting other plans that have to be HARD parsed again, etc.
June 30, 2005 - 5:34 pm UTC
I believe they were showing us a hard parse and making the comment that
... so I guess the first occurrence
of this query in a session will get hit by the parsing penalty and successive
queries will soft parse and be fast ....
but a closer read does indicate they might think the other sessions will have this long parse but they won't, they'll be soft as well.
Hard parse
VA, June 30, 2005 - 7:14 pm UTC
Hm, well, the way Siebel works is that each user (say there are hundreds) connects to the database as themselves and issues queries against siebel.<tablename> for which they have grants given via a role.
So, if JOE connects and issues a monstrous query and JANE connects and issues the same query, even though JOE's query is in the library cache, JANE's query will still be hard parsed because the parsing userid is different, right?
June 30, 2005 - 9:01 pm UTC
nope, that'll be a soft parse still.
ops$tkyte@ORA9IR2> alter system flush shared_pool;
System altered.
ops$tkyte@ORA9IR2> @connect scott/tiger
ops$tkyte@ORA9IR2> set termout off
scott@ORA9IR2> set termout on
scott@ORA9IR2> select * from scott.emp where 1=0;
no rows selected
scott@ORA9IR2> @connect /
scott@ORA9IR2> set termout off
ops$tkyte@ORA9IR2> set termout on
ops$tkyte@ORA9IR2> @mystat parse
ops$tkyte@ORA9IR2> set echo off
NAME VALUE
------------------------------ ----------
parse time cpu 1
parse time elapsed 2
parse count (total) 64
parse count (hard) 1
parse count (failures) 0
ops$tkyte@ORA9IR2> @mystat parse
ops$tkyte@ORA9IR2> set echo off
NAME VALUE
------------------------------ ----------
parse time cpu 1
parse time elapsed 2
parse count (total) 66
parse count (hard) 1 <<<<<======
parse count (failures) 0
ops$tkyte@ORA9IR2> select * from scott.emp where 1=0;
no rows selected
ops$tkyte@ORA9IR2> @mystat parse
ops$tkyte@ORA9IR2> set echo off
NAME VALUE
------------------------------ ----------
parse time cpu 1
parse time elapsed 2
parse count (total) 71
parse count (hard) 1 <<<<<========
parse count (failures) 0
Doubt on OPTIMIZER_INDEX_COST_ADJ
Arindam Mukherjee, September 15, 2006 - 5:38 am UTC
Respected Mr. Tom,
The following lines are excerpted from your book -"Effective Oracle by Design".
"OPTIMIZER_INDEX_CACHING parameter as being used to tell
Oracle the percentage of an index that is cached, you can think of the OPTIMIZER_INDEX_COST_ADJ parameter as telling Oracle how much of the table data will be cached. The lower the number, the less costly single-block table
accesses become; conversely, the higher this number, the more costly. "
I could not understand that relationship - the lower the number, the less costly single-block table accesses become. Please explain in detail.
Since I could not get the essence of "optimizer_index_cost_adj", may I get the same effect if I use /*+ INDEX */ hints without setting that parameter?
September 15, 2006 - 6:59 am UTC
did you read the rest of the section whereby I did examples of each and showed the effect of changing one over the other?
NO, you may not get the same effect as using the index hint.
the index hint (which you should use in extremely RARE conditions when nothing but nothing but nothing else works) tells the optimizer "thou shalt use this index"
the optimizer_index_* settings change the costing method, give the optimizer more information so it can make the best choice on your system with your data.
A hint is telling it what to do.
The optimizer_index_* settings give it more information to do its job properly.
"It" being the optimizer.
Sorry!! Once Again
Arindam Mukherjee, September 15, 2006 - 9:53 am UTC
Sir,
Thanks for your response and for giving me a clear distinction between hints and parameter settings. I read your book, and especially the section on optimizer_index_cost_adj, five times.
The gist is: if I lower the value of optimizer_index_cost_adj below 100, I inform the CBO that the cost of an index scan is lower relative to a full-table scan.
Let's assume that with optimizer_index_cost_adj = 100, the CBO chooses a FULL TABLE scan.
Alter session set optimizer_index_cost_adj = 25;
At this point, may the CBO again choose a FULL TABLE scan? If yes, that's okay.
If no, then indirectly I have persuaded the CBO to go for an index scan rather than directly using /*+ INDEX */ hints.
At this point, is there any difference between setting a lower value and using /*+ INDEX */ hints?
September 15, 2006 - 10:30 am UTC
<quote from effective Oracle by design>
If you think of the OPTIMIZER_INDEX_CACHING parameter as being used to tell Oracle the percentage of an index that is cached, you can think of the OPTIMIZER_INDEX_COST_ADJ parameter as telling Oracle how much of the table data will be cached. The lower the number, the less costly single-block table accesses become; conversely, the higher this number, the more costly. A way to think of this would be that this number reflects the cost of performing multiblock I/O (associated with full-table scans, for example) versus the cost of performing single-block I/O (associated with index reads). If you leave this parameter at the default setting of 100, these operations are costed the same. Setting this value to 50 causes the optimizer to consider a single-block table access as half as expensive as multiblock I/O would be, effectively cutting in half the cost of table access (50/100).
</quote>
it does not say "index scan is lower than full table scan" - it costs a single block table access lower, but single block accesses will be eventually MULTIPLIED by number of block accesses - so it will not make index scans *universally* less than full table (multiblock) scans.
and yes, the optimizer may still choose a full table scan at any point, with any setting of optimizer index cost adj.
If your goal is to FORCE the use of a particular index (not typically a WORTHY goal bear in mind) then the index hint (well formed and verbosely specified) would be the technique.
If your goal is to give the optimizer more information about your particular system - you will use system statistics and/or optimizer_index_* settings.
Yes, there is always (at every point) by definition a difference between giving the optimizer information to do its job (optimizer_index_cost_*) and using hints that tell it precisely what to do. That'll never change.
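(To make that distinction concrete -- a minimal sketch; t, t_idx and the column x are hypothetical names, not from this thread:)
-- telling the optimizer exactly what to do (rarely the right approach):
select /*+ index( t t_idx ) */ * from t where x = 42;

-- giving the optimizer information and letting it decide:
alter session set optimizer_index_cost_adj = 25;
alter session set optimizer_index_caching  = 90;
select * from t where x = 42;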
Completely Satisfied
Arindam Mukherjee, September 15, 2006 - 10:45 am UTC
Respected Sir,
Now I am fully satisfied with your answer. Since I know less, before asking you, I usually search google.com if my question seems silly.
From the site http://www.dba-oracle.com/oracle_tips_cost_adj.htm, I have got this part: "If you do not like the propensity of the CBO "choose" optimizer_mode parameter to favor full-table scans, you can lower the value of optimizer_index_cost_adj to 20, thereby telling the CBO to give a lower cost to index scans over full-table scans."
Thank you for clearing up one of my doubts.
gather_schema_stats Vs gather_database_stats
Zahir M, October 12, 2006 - 10:17 am UTC
What is the difference between gather_schema_stats and gather_database_stats?
In which scenarios would these methods be used, one over the other?
Also, if a database / optimizer is made aware of CPU & IO statistics via these methods,
are OPTIMIZER_INDEX_COST_ADJ and optimizer_index_caching still relevant for the CBO?
October 12, 2006 - 11:43 am UTC
well, obviously one does a schema, the other does more than a schema.
I would not use database, I would suggest schema, if I even used schema instead of "gather table/index". As each schema probably has its own requirements for frequency of gathering and level of detail to be gathered.
and yes, those parameters still have effect no matter what else is in place.
gather_system_stats
Zahir M, October 13, 2006 - 11:42 am UTC
Thank you for your response .
It was a typo.
I meant to ask the difference between gather_system_stats and gather_database_stats.
In which scenarios would these methods be used, one over the other?
October 13, 2006 - 2:39 pm UTC
definitely, did you read the descriptions of each in the documentation????
system stats are measures of single block IO speed, multiblock IO speed, cpu speed, true multiblock read counts...
database stats gathers statistics on objects in the database, they are entirely 100% different.
Question Rephrased
Zahir M, October 13, 2006 - 3:48 pm UTC
Yes . I did read the documentation.
Probably I should have rephrased my question.
1. Do these methods complement each other?
2. How often do I need to run gather_system_stats
(for every typical workload)?
My understanding is that, for the CBO to work efficiently,
we need to do both gather_system_stats and (gather_schema_stats or gather_database_stats).
October 13, 2006 - 7:20 pm UTC
1) they are somewhat complementary, yet, gather system stats gathers statistics about your physical hardware. gather database stats gathers statistics about the objects in your database, the optimizer uses both.
2) not often, not unless you change your workload/hardware.
If you have access to my book Effective Oracle by Design, the chapter on the CBO covers these topics in some detail.
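(For reference, a minimal sketch of the two kinds of calls -- the schema name and the 60-minute window are just assumptions for illustration:)
-- hardware/workload statistics: single block vs. multiblock IO speed, CPU speed, etc.,
-- gathered over a representative 60 minute window
exec dbms_stats.gather_system_stats( gathering_mode => 'interval', interval => 60 )

-- object statistics for one schema's tables and indexes
exec dbms_stats.gather_schema_stats( ownname => 'SCOTT', cascade => true )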
optimizer_index_cost_adj = 1 and optimizer_index_caching = 0
Matthias Schulz from Nuremberg, Germany, December 02, 2006 - 4:10 am UTC
Hello Tom,
our DBA's are running all of our (very large) OLTP databases with optimizer_index_cost_adj = 1 and optimizer_index_caching = 0, which means, if I got you right: "all index accesses are PHYSICAL IO".
The databases are connected to a large SAN with fast caching.
Here is an analysis I made on 3 of our databases:
-------------------------------------------------
SELECT a.average_wait AS "Full Scan Read I/O-Waits",
b.average_wait AS "Index Read I/O-Waits",
a.total_waits / (a.total_waits + b.total_waits)
* 100 AS "Full Scan-I/O-Waits %",
b.total_waits / (a.total_waits + b.total_waits)
* 100 AS "Index Scan-I/O-Waits %",
(b.average_wait / a.average_wait)
* 100 AS "=> optimizer_index_cost_adj"
FROM v$system_event a, v$system_event b
WHERE a.event = 'db file scattered read'
AND b.event = 'db file sequential read';
DB-1 (8i):
----------
Full Scan Read I/O-Waits 0,646320575898827
Index Read I/O-Waits 0,195235472994047
Full Scan-I/O-Waits % 2,62520646863152
Index Scan-I/O-Waits % 97,3747935313685
=> optimizer_index_cost_adj 30,2072191841542
DB-2 (10g):
-----------
Full Scan Read I/O-Waits 0,43
Index Read I/O-Waits 0,12
Full Scan-I/O-Waits % 0,479834418346362
Index Scan-I/O-Waits % 99,5201655816536
=> optimizer_index_cost_adj 27,906976744186
DB-3 (9i):
----------
Full Scan Read I/O-Waits 1
Index Read I/O-Waits 1
Full Scan-I/O-Waits % 2,08710193113856
Index Scan-I/O-Waits % 97,9128980688614
=> optimizer_index_cost_adj 100
-------------------------------------------------
Based on what you write in your book (OLTP: optimizer_index_cost_adj = 30 and optimizer_index_caching = 90) and the results above for DB-1 and DB-2, I have proposed to go to optimizer_index_cost_adj = 30 and optimizer_index_caching = 50.
Our DBA's say that I should prove that my proposal is better than the actual setting.
So, Tom, could you please tell me what you would recommend here and why.
(I need to convince our DBA's to change the values, or to convince myself that their values are better :-) ).
Best regards,
Matthias Schulz
December 02, 2006 - 1:12 pm UTC
I'd say to the DBA's
PROVE THAT YOUR SETTINGS ARE BETTER THAN THE DEFAULT.
I'd just use the defaults. I'd be interested in hearing what their logical scientific reasoning was... they might have one, then again - they might not have any idea why they've done it.
sandro, April 24, 2007 - 7:29 am UTC
Why if I run
SELECT a.average_wait AS "Full Scan Read I/O-Waits",
b.average_wait AS "Index Read I/O-Waits",
a.total_waits / (a.total_waits + b.total_waits)
* 100 AS "Full Scan-I/O-Waits %",
b.total_waits / (a.total_waits + b.total_waits)
* 100 AS "Index Scan-I/O-Waits %",
(b.average_wait / a.average_wait)
* 100 AS "=> optimizer_index_cost_adj"
FROM v$system_event a, v$system_event b
WHERE a.event = 'db file scattered read'
AND b.event = 'db file sequential read'
do I obtain this error?
ORA-01476: divisor is equal to zero
These are the values in v$system_event:
select *
from v$system_event a
,v$system_event b
where a.event = 'db file scattered read'
and b.event = 'db file sequential read'
;
EVENT TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED AVERAGE_WAIT TIME_WAITED_MICRO EVENT TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED AVERAGE_WAIT TIME_WAITED_MICRO
---------------------------------------------------------------- ----------- -------------- ----------- ------------ ----------------- ---------------------------------------------------------------- ----------- -------------- ----------- ------------ -----------------
db file scattered read 1727792 0 674953 0 6749528064 db file sequential read 4270698 0 1974872 0 19748717568
April 24, 2007 - 11:26 am UTC
because one of your divisors is equal to zero?
look at your average wait.... for example....
ops$tkyte%ORA10GR2> create table t
2 as
3 select
4 EVENT, TOTAL_WAITS, TOTAL_TIMEOUTS, TIME_WAITED, AVERAGE_WAIT, TIME_WAITED_MICRO
5 from v$system_event where 1=0;
Table created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> insert into t values ( 'db file scattered read', 1727792, 0, 674953, 0, 6749528064 );
1 row created.
ops$tkyte%ORA10GR2> insert into t values ( 'db file sequential read', 4270698, 0, 1974872, 0, 19748717568 );
1 row created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> SELECT a.average_wait AS "Full Scan Read I/O-Waits",
2 b.average_wait AS "Index Read I/O-Waits",
3 a.total_waits / (a.total_waits + b.total_waits)
4 * 100 AS "Full Scan-I/O-Waits %",
5 b.total_waits / (a.total_waits + b.total_waits)
6 * 100 AS "Index Scan-I/O-Waits %",
7 (b.average_wait / a.average_wait)
8 * 100 AS "=> optimizer_index_cost_adj"
9 FROM t a, t b
10 WHERE a.event = 'db file scattered read'
11 AND b.event = 'db file sequential read'
12 /
(b.average_wait / a.average_wait)
*
ERROR at line 7:
ORA-01476: divisor is equal to zero
use this instead:
decode( divisor, 0, to_number(null), numerator/divisor )
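(Applied to the query above, a sketch of the same calculation with the division protected:)
SELECT a.average_wait AS "Full Scan Read I/O-Waits",
       b.average_wait AS "Index Read I/O-Waits",
       decode( a.average_wait, 0, to_number(null),
               (b.average_wait / a.average_wait) * 100 ) AS "=> optimizer_index_cost_adj"
  FROM v$system_event a, v$system_event b
 WHERE a.event = 'db file scattered read'
   AND b.event = 'db file sequential read';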
optimizer_index_cost_adj in optimizer formula
AMIR RIAZ, September 29, 2007 - 6:26 am UTC
Hi Tom
I often ask you more conceptual questions; here is one more. A few posts back in some other thread, Jonathan Lewis gave me some formulas on how the optimizer establishes a guess about the cost. For example, the cost of an index scan is:
cost =
blevel +
ceil(selectivity * leaf_blocks) +
ceil(selectivity * clustering_factor)
and the cost of a full table scan is:
Cost = High Water Mark / adjusted db_file_multiblock_read_count.
These formulas really help me in understanding two basic aspects of the Oracle optimizer, but I am still not able to figure out
how Oracle uses optimizer_index_cost_adj and optimizer_index_caching in these formulas.
I have your book Effective Oracle by Design and I know their definitions and impact, but I want to know how the optimizer uses them in a mathematical formula like the above.
regards
Amir Riaz
October 03, 2007 - 1:50 pm UTC
don't worry about the math - it changes each release. It is interesting but not relevant day to day.
You only need to understand the IMPACT of setting these (and that you would not want to set them after 9iR2 with system statistics - just leave them alone, at the default)
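(For what it's worth -- a rough sketch only, based on Jonathan Lewis's note earlier in this thread and not on any release-specific formula: the cost produced by the index formula above is scaled by optimizer_index_cost_adj/100, so with made-up numbers:)
index access cost = blevel + ceil(selectivity * leaf_blocks)
                           + ceil(selectivity * clustering_factor)
                  = 2 + 2 + 16 = 20

with optimizer_index_cost_adj = 50:
scaled cost ~ 20 * (50/100) = 10
optimizer_index_caching is commonly described as discounting the index-block portion (blevel plus leaf blocks) of that cost for nested-loop and IN-list index probes by the percentage you supply - but, as Tom says above, treat the exact arithmetic as release-dependent.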
Great
Dan, January 11, 2008 - 4:46 pm UTC
Hi Tom,
I have a query that performs badly (it did not return after more than one minute).
After I issued the following:
alter session set optimizer_index_cost_adj=5
alter session set optimizer_index_caching=90
The query ran much faster (in several seconds). The system had optimizer_index_cost_adj=100 and optimizer_index_caching=0.
The explain plan was:
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=CHOOSE 558 1954
SORT AGGREGATE 1 17
INLIST ITERATOR
INDEX RANGE SCAN JPOST.IDX_RPT_JP_REQ_STATUS_HIST 1 17 3
NESTED LOOPS 1 18 4
INDEX RANGE SCAN JPOST.PK_JP_RELJOBLOC 1 10 3
TABLE ACCESS BY INDEX ROWID JPOST.JOBPOST_LOCATION 1 8 1
INDEX UNIQUE SCAN JPOST.PK_JP_LOCATION 1
NESTED LOOPS 1 15 3
INDEX RANGE SCAN JPOST.PK_JP_RELJOBLOC 1 10 3
INDEX UNIQUE SCAN JPOST.PK_JP_LOCATION 1 5
NESTED LOOPS 1 37 4
INDEX RANGE SCAN JPOST.PK_JP_RELJOBLOC 1 10 3
TABLE ACCESS BY INDEX ROWID JPOST.JOBPOST_LOCATION 1 27 1
INDEX UNIQUE SCAN JPOST.PK_JP_LOCATION 1
TABLE ACCESS BY INDEX ROWID JPOST.ACCT_COMPANY 1 7 2
INDEX UNIQUE SCAN JPOST.PK_ACCT_COMPANY 1 1
INDEX UNIQUE SCAN JPOST.PK_ACCT_COMPANY 1 4 1
TABLE ACCESS BY INDEX ROWID JPOST.ACCT_COMPANY 1 25 2
INDEX UNIQUE SCAN JPOST.PK_ACCT_COMPANY 1 1
TABLE ACCESS BY INDEX ROWID JPOST.JOBPOST_JOB_CSTM_TEXT 1 14 3
INDEX UNIQUE SCAN JPOST.PK_JOBPOST_JOB_CSTM_TEXT 1 2
NESTED LOOPS 1 39 5
INDEX RANGE SCAN JPOST.IDX_RPT_JP_JOB_CSTM_V_DATA 2 18 3
TABLE ACCESS BY INDEX ROWID JPOST.CSTM_VALIDATION_DATA 1 30 1
INDEX UNIQUE SCAN JPOST.PK_CSTM_VALIDATION_DATA 1
NESTED LOOPS 1 39 5
INDEX RANGE SCAN JPOST.IDX_RPT_JP_JOB_CSTM_V_DATA 2 18 3
TABLE ACCESS BY INDEX ROWID JPOST.CSTM_VALIDATION_DATA 1 30 1
INDEX UNIQUE SCAN JPOST.PK_CSTM_VALIDATION_DATA 1
NESTED LOOPS 1 39 5
INDEX RANGE SCAN JPOST.IDX_RPT_JP_JOB_CSTM_V_DATA 2 18 3
TABLE ACCESS BY INDEX ROWID JPOST.CSTM_VALIDATION_DATA 1 30 1
INDEX UNIQUE SCAN JPOST.PK_CSTM_VALIDATION_DATA 1
NESTED LOOPS 1 39 5
INDEX RANGE SCAN JPOST.IDX_RPT_JP_JOB_CSTM_V_DATA 2 18 3
TABLE ACCESS BY INDEX ROWID JPOST.CSTM_VALIDATION_DATA 1 30 1
INDEX UNIQUE SCAN JPOST.PK_CSTM_VALIDATION_DATA 1
TABLE ACCESS BY INDEX ROWID JPOST.JOBPOST_USER 1 12 2
INDEX UNIQUE SCAN JPOST.PK_JP_USER 1 1
TABLE ACCESS BY INDEX ROWID JPOST.JOBPOST_USER 1 16 2
INDEX UNIQUE SCAN JPOST.PK_JP_USER 1 1
TABLE ACCESS BY INDEX ROWID JPOST.JOBPOST_USER 1 12 2
INDEX UNIQUE SCAN JPOST.PK_JP_USER 1 1
TABLE ACCESS BY INDEX ROWID JPOST.JOBPOST_USER 1 16 2
INDEX UNIQUE SCAN JPOST.PK_JP_USER 1 1
NESTED LOOPS 558 111 K 1954
NESTED LOOPS 381 71 K 1192
NESTED LOOPS OUTER 261 45 K 670
INLIST ITERATOR
TABLE ACCESS BY GLOBAL INDEX ROWID JPOST.JOBPOST_JOB 261 15 K 148 ROWID ROW L
INDEX RANGE SCAN JPOST.IDX_RPT_JOBPOST_JOB_1 261 3
VIEW PUSHED PREDICATE 1 117 2
FILTER
NESTED LOOPS 1 117 411
HASH JOIN 1 86 409
PARTITION RANGE ALL 1 9
TABLE ACCESS BY LOCAL INDEX ROWID JPOST.YCM_CANDIDATE_ACTIVITY 276 9 K 279 KEY KEY
INDEX RANGE SCAN JPOST.IDX_N_YCM_CAND_ACT_AID_ACT_DT 276 249 KEY KEY
TABLE ACCESS BY INDEX ROWID JPOST.YCM_ACTIVITY_REL 302 15 K 129
INDEX RANGE SCAN JPOST.IDX_N_YCM_ACTIVITY_REL_AIAC 1 2
TABLE ACCESS BY GLOBAL INDEX ROWID JPOST.YCM_CANDIDATE 1 31 2 ROWID ROW L
INDEX UNIQUE SCAN JPOST.PK_YCM_CANDIDATE 1 1
TABLE ACCESS BY INDEX ROWID JPOST.JOBPOST_REL_JOB_USER 1 13 2
INDEX UNIQUE SCAN JPOST.PK_JOBPOST_REL_JOB_USER 1 1
TABLE ACCESS BY INDEX ROWID JPOST.JOBPOST_REL_JOB_USER 1 13 2
INDEX UNIQUE SCAN JPOST.PK_JOBPOST_REL_JOB_USER 1 1
After alter session it became:
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=CHOOSE 558 53
SORT AGGREGATE 1 17
INLIST ITERATOR
INDEX RANGE SCAN JPOST.IDX_RPT_JP_REQ_STATUS_HIST 1 17 1
NESTED LOOPS 1 18 2
INDEX RANGE SCAN JPOST.PK_JP_RELJOBLOC 1 10 3
TABLE ACCESS BY INDEX ROWID JPOST.JOBPOST_LOCATION 1 8 1
INDEX UNIQUE SCAN JPOST.PK_JP_LOCATION 1
NESTED LOOPS 1 15 2
INDEX RANGE SCAN JPOST.PK_JP_RELJOBLOC 1 10 3
INDEX UNIQUE SCAN JPOST.PK_JP_LOCATION 1 5
NESTED LOOPS 1 37 2
INDEX RANGE SCAN JPOST.PK_JP_RELJOBLOC 1 10 3
TABLE ACCESS BY INDEX ROWID JPOST.JOBPOST_LOCATION 1 27 1
INDEX UNIQUE SCAN JPOST.PK_JP_LOCATION 1
TABLE ACCESS BY INDEX ROWID JPOST.ACCT_COMPANY 1 7 1
INDEX UNIQUE SCAN JPOST.PK_ACCT_COMPANY 1 1
INDEX UNIQUE SCAN JPOST.PK_ACCT_COMPANY 1 4 1
TABLE ACCESS BY INDEX ROWID JPOST.ACCT_COMPANY 1 25 1
INDEX UNIQUE SCAN JPOST.PK_ACCT_COMPANY 1 1
TABLE ACCESS BY INDEX ROWID JPOST.JOBPOST_JOB_CSTM_TEXT 1 14 1
INDEX UNIQUE SCAN JPOST.PK_JOBPOST_JOB_CSTM_TEXT 1 2
NESTED LOOPS 1 39 2
INDEX RANGE SCAN JPOST.IDX_RPT_JP_JOB_CSTM_V_DATA 2 18 3
TABLE ACCESS BY INDEX ROWID JPOST.CSTM_VALIDATION_DATA 1 30 1
INDEX UNIQUE SCAN JPOST.PK_CSTM_VALIDATION_DATA 1
NESTED LOOPS 1 39 2
INDEX RANGE SCAN JPOST.IDX_RPT_JP_JOB_CSTM_V_DATA 2 18 3
TABLE ACCESS BY INDEX ROWID JPOST.CSTM_VALIDATION_DATA 1 30 1
INDEX UNIQUE SCAN JPOST.PK_CSTM_VALIDATION_DATA 1
NESTED LOOPS 1 39 2
INDEX RANGE SCAN JPOST.IDX_RPT_JP_JOB_CSTM_V_DATA 2 18 3
TABLE ACCESS BY INDEX ROWID JPOST.CSTM_VALIDATION_DATA 1 30 1
INDEX UNIQUE SCAN JPOST.PK_CSTM_VALIDATION_DATA 1
NESTED LOOPS 1 39 2
INDEX RANGE SCAN JPOST.IDX_RPT_JP_JOB_CSTM_V_DATA 2 18 3
TABLE ACCESS BY INDEX ROWID JPOST.CSTM_VALIDATION_DATA 1 30 1
INDEX UNIQUE SCAN JPOST.PK_CSTM_VALIDATION_DATA 1
TABLE ACCESS BY INDEX ROWID JPOST.JOBPOST_USER 1 12 1
INDEX UNIQUE SCAN JPOST.PK_JP_USER 1 1
TABLE ACCESS BY INDEX ROWID JPOST.JOBPOST_USER 1 16 1
INDEX UNIQUE SCAN JPOST.PK_JP_USER 1 1
TABLE ACCESS BY INDEX ROWID JPOST.JOBPOST_USER 1 12 1
INDEX UNIQUE SCAN JPOST.PK_JP_USER 1 1
TABLE ACCESS BY INDEX ROWID JPOST.JOBPOST_USER 1 16 1
INDEX UNIQUE SCAN JPOST.PK_JP_USER 1 1
NESTED LOOPS 558 111 K 53
NESTED LOOPS 381 71 K 34
NESTED LOOPS OUTER 261 45 K 21
INLIST ITERATOR
TABLE ACCESS BY GLOBAL INDEX ROWID JPOST.JOBPOST_JOB 261 15 K 8 ROWID ROW L
INDEX RANGE SCAN JPOST.IDX_RPT_JOBPOST_JOB_1 261 1
VIEW PUSHED PREDICATE 1 117 1
NESTED LOOPS 1 117 3
NESTED LOOPS 1 86 2
INDEX RANGE SCAN JPOST.IDX_RPT_YCM_CAND_ACTIVITY_7 1 34 3
TABLE ACCESS BY INDEX ROWID JPOST.YCM_ACTIVITY_REL 1 52 1
INDEX UNIQUE SCAN JPOST.PK_YCM_ACTIVITY_REL 1
TABLE ACCESS BY GLOBAL INDEX ROWID JPOST.YCM_CANDIDATE 1 31 1 ROWID ROW L
INDEX UNIQUE SCAN JPOST.PK_YCM_CANDIDATE 1
TABLE ACCESS BY INDEX ROWID JPOST.JOBPOST_REL_JOB_USER 1 13 1
INDEX UNIQUE SCAN JPOST.PK_JOBPOST_REL_JOB_USER 1
TABLE ACCESS BY INDEX ROWID JPOST.JOBPOST_REL_JOB_USER 1 13 1
INDEX UNIQUE SCAN JPOST.PK_JOBPOST_REL_JOB_USER 1
--------------
the query is:
SELECT
jobpost_job.ACCT_ID
,V_RPT_CSTM_659_ACTIVITY_CHILD.ACCT_ID
,V_RPT_CSTM_659_ACTIVITY_CHILD.CANDIDATE_ACTIVITY_ID
,V_RPT_CSTM_659_ACTIVITY_CHILD.CANDIDATE_ID
,V_RPT_CSTM_659_ACTIVITY_CHILD.FIRST_NAME
,V_RPT_CSTM_659_ACTIVITY_CHILD.LAST_NAME
,V_RPT_CSTM_659_ACTIVITY_CHILD.GENDER_CODE
,V_RPT_CSTM_659_ACTIVITY_CHILD.gender_abbr
,V_RPT_CSTM_659_ACTIVITY_CHILD.RACE_CODE
,V_RPT_CSTM_659_ACTIVITY_CHILD.race_name
,JOBPOST_JOB.JOB_ID
,JOBPOST_JOB.JOB_CODE
,JOBPOST_JOB.JOB_TITLE
,(SELECT MIN(DATE_CREATED)
FROM JOBPOST_REQ_STATUS_HISTORY
WHERE REQ_STATUS_CODE IN (1,3) -- OPEN, REOPEN
AND JOBPOST_REQ_STATUS_HISTORY.JOB_ID = JOBPOST_JOB.JOB_ID)
,JOBPOST_JOB.POSITIONS_TOTAL
,(SELECT JOBPOST_LOCATION.ACTIVE_FLG
FROM JOBPOST_LOCATION
,JOBPOST_REL_JOB_LOCATION
WHERE JOBPOST_JOB.JOB_ID = JOBPOST_REL_JOB_LOCATION.JOB_ID
AND JOBPOST_REL_JOB_LOCATION.LOCATION_ID = JOBPOST_LOCATION.LOCATION_ID)
,(SELECT JOBPOST_LOCATION.LOCATION_ID
FROM JOBPOST_LOCATION
,JOBPOST_REL_JOB_LOCATION
WHERE JOBPOST_JOB.JOB_ID = JOBPOST_REL_JOB_LOCATION.JOB_ID
AND JOBPOST_REL_JOB_LOCATION.LOCATION_ID = JOBPOST_LOCATION.LOCATION_ID)
,(SELECT LOCATION_NAME
FROM JOBPOST_LOCATION
,JOBPOST_REL_JOB_LOCATION
WHERE JOBPOST_JOB.JOB_ID = JOBPOST_REL_JOB_LOCATION.JOB_ID
AND JOBPOST_REL_JOB_LOCATION.LOCATION_ID = JOBPOST_LOCATION.LOCATION_ID)
,(SELECT ACTIVE_FLG
FROM ACCT_COMPANY
WHERE ACCT_COMPANY.ACCT_COMPANY_ID = JOBPOST_JOB.ACCT_COMPANY_ID)
,(SELECT ACCT_COMPANY_ID
FROM ACCT_COMPANY
WHERE ACCT_COMPANY.ACCT_COMPANY_ID = JOBPOST_JOB.ACCT_COMPANY_ID)
,(SELECT COMPANY_NAME
FROM ACCT_COMPANY
WHERE ACCT_COMPANY.ACCT_COMPANY_ID = JOBPOST_JOB.ACCT_COMPANY_ID)
,NVL((SELECT JOBPOST_JOB_CSTM_TEXT.TEXT
FROM JOBPOST_JOB_CSTM_TEXT
WHERE JOBPOST_JOB_CSTM_TEXT.job_id = JOBPOST_JOB.job_id
AND JOBPOST_JOB_CSTM_TEXT.account_field_id = 1097),'') --HIRING MANAGER, CONTRACT_RECRUITER
,NVL((SELECT validation_data_disp
FROM CSTM_VALIDATION_DATA, JOBPOST_JOB_CSTM_VALID_DATA
WHERE CSTM_VALIDATION_DATA.validation_data_id =
JOBPOST_JOB_CSTM_VALID_DATA.validation_data_id
AND JOBPOST_JOB_CSTM_VALID_DATA.job_id = JOBPOST_JOB.job_id
AND CSTM_VALIDATION_DATA.account_field_id = 1090),' ')--function 1
,NVL((SELECT validation_data_disp
FROM CSTM_VALIDATION_DATA, JOBPOST_JOB_CSTM_VALID_DATA
WHERE CSTM_VALIDATION_DATA.validation_data_id =
JOBPOST_JOB_CSTM_VALID_DATA.validation_data_id
AND JOBPOST_JOB_CSTM_VALID_DATA.job_id = JOBPOST_JOB.job_id
AND CSTM_VALIDATION_DATA.account_field_id = 1091),' ')--function 2
,NVL((SELECT validation_data_disp
FROM CSTM_VALIDATION_DATA, JOBPOST_JOB_CSTM_VALID_DATA
WHERE CSTM_VALIDATION_DATA.validation_data_id =
JOBPOST_JOB_CSTM_VALID_DATA.validation_data_id
AND JOBPOST_JOB_CSTM_VALID_DATA.job_id = JOBPOST_JOB.job_id
AND CSTM_VALIDATION_DATA.account_field_id = 1092),' ')--function 3
,NVL((SELECT validation_data_disp
FROM CSTM_VALIDATION_DATA, JOBPOST_JOB_CSTM_VALID_DATA
WHERE CSTM_VALIDATION_DATA.validation_data_id =
JOBPOST_JOB_CSTM_VALID_DATA.validation_data_id
AND JOBPOST_JOB_CSTM_VALID_DATA.job_id = JOBPOST_JOB.job_id
AND CSTM_VALIDATION_DATA.account_field_id = 1096),' ')--HR_MANAGER
,JOB_REL_RECRUITER.USER_ID
,(SELECT USER_FIRSTNAME
FROM JOBPOST_USER
WHERE JOBPOST_USER.USER_ID = JOB_REL_RECRUITER.USER_ID) --RECRUITER_USER.USER_FIRSTNAME
,(SELECT USER_LASTNAME
FROM JOBPOST_USER
WHERE JOBPOST_USER.USER_ID = JOB_REL_RECRUITER.USER_ID)--RECRUITER_USER.USER_LASTNAME
,JOB_REL_MGR.USER_ID
,(SELECT USER_FIRSTNAME
FROM JOBPOST_USER
WHERE JOBPOST_USER.USER_ID = JOB_REL_MGR.USER_ID) --MANAGER_USER.USER_FIRSTNAME
,(SELECT USER_LASTNAME
FROM JOBPOST_USER
WHERE JOBPOST_USER.USER_ID = JOB_REL_MGR.USER_ID) --MANAGER_USER.USER_LASTNAME
,V_RPT_CSTM_659_ACTIVITY_CHILD.ACTIVITY_DISP
,V_RPT_CSTM_659_ACTIVITY_CHILD.ACTIVITY_CODE
,V_RPT_CSTM_659_ACTIVITY_CHILD.ACTIVITY_TYPE_CODE
,V_RPT_CSTM_659_ACTIVITY_CHILD.ACTIVITY_TYPE_DISP
,V_RPT_CSTM_659_ACTIVITY_CHILD.RESULT_STATUS_CODE
,V_RPT_CSTM_659_ACTIVITY_CHILD.RESULT_STATUS_DISP
-- ,YCM_CANDIDATE_ACTIVITY.ACTIVITY_DATE
,NVL(V_RPT_CSTM_659_ACTIVITY_CHILD.ACTIVITY_DATE,trunc(SYSDATE)-1
)
FROM
JOBPOST_JOB
,JOBPOST_REL_JOB_USER JOB_REL_RECRUITER
,JOBPOST_REL_JOB_USER JOB_REL_MGR
, (select
YCM_CANDIDATE_ACTIVITY.ACCT_ID
,YCM_CANDIDATE_ACTIVITY.CANDIDATE_ACTIVITY_ID
,ycm_candidate_activity.JOB_ID
,YCM_CANDIDATE.CANDIDATE_ID
,YCM_CANDIDATE.FIRST_NAME
,YCM_CANDIDATE.LAST_NAME
,YCM_CANDIDATE.GENDER_CODE
,(SELECT GENDER_ABBR
FROM YCM_CODES_GENDER
WHERE YCM_CANDIDATE.GENDER_CODE = YCM_CODES_GENDER.GENDER_CODE) gender_abbr
,YCM_CANDIDATE.RACE_CODE
,(SELECT RACE_NAME
FROM YCM_CODES_RACE
WHERE YCM_CANDIDATE.RACE_CODE = YCM_CODES_RACE.RACE_CODE) race_name
,YCM_ACTIVITY_REL.ACTIVITY_DISP
,YCM_ACTIVITY_REL.ACTIVITY_CODE
,YCM_ACTIVITY_REL.ACTIVITY_TYPE_CODE
,YCM_ACTIVITY_REL.ACTIVITY_TYPE_DISP
,YCM_ACTIVITY_REL.RESULT_STATUS_CODE
,YCM_ACTIVITY_REL.RESULT_STATUS_DISP
,YCM_CANDIDATE_ACTIVITY.ACTIVITY_DATE
from ycm_candidate_activity, ycm_candidate, ycm_activity_rel
where
YCM_CANDIDATE_ACTIVITY.ACCT_ID = YCM_ACTIVITY_REL.ACCT_ID
AND YCM_CANDIDATE_ACTIVITY.ACTIVITY_ID = YCM_ACTIVITY_REL.ACTIVITY_ID
and YCM_CANDIDATE.CANDIDATE_ID = YCM_CANDIDATE_ACTIVITY.CANDIDATE_ID
and YCM_CANDIDATE.ACCT_ID = YCM_CANDIDATE_ACTIVITY.ACCT_ID
--and ycm_candidate_activity.ACCT_ID = 659
) V_RPT_CSTM_659_ACTIVITY_CHILD
WHERE
JOBPOST_JOB.JOB_ID = V_RPT_CSTM_659_ACTIVITY_CHILD.JOB_ID(+)
AND JOBPOST_JOB.ACCT_ID = V_RPT_CSTM_659_ACTIVITY_CHILD.ACCT_ID (+)
AND JOBPOST_JOB.JOB_ID = JOB_REL_RECRUITER.JOB_ID
AND JOB_REL_RECRUITER.JOB_USER_TYPE_CODE = 1
AND JOBPOST_JOB.JOB_ID = JOB_REL_MGR.JOB_ID
AND JOB_REL_MGR.JOB_USER_TYPE_CODE = 2
AND JOBPOST_JOB.REQ_STATUS IN (1,3) --OPEN, REOPEN
AND JOBPOST_JOB.PRODUCTION_CODE = 1
and JOBPOST_JOB.ACCT_ID = 659
----
Because changing the optimizer index setting at the system level can have a big impact on other queries, is there a way to achieve that same query plan by modifying the query itself?
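For example, would scoping the change to just this one statement with the OPT_PARAM hint be a reasonable alternative? This is only a sketch I have not tried, and I believe that hint needs 10gR2 or later:
SELECT /*+ OPT_PARAM('optimizer_index_cost_adj' 50) */
       jobpost_job.ACCT_ID
       -- ... rest of the select list, FROM and WHERE clauses exactly as above ...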
Thank you!
Dan
To find out the proper value for optimizer_index_cost_adj
INDRANIL DAS, February 17, 2009 - 8:49 am UTC
We have a 15-terabyte database. We are getting the error below when executing the following SQL:
SQL> SELECT a.average_wait AS "Full Scan Read I/O-Waits",
2 b.average_wait AS "Index Read I/O-Waits",
3 a.total_waits / (a.total_waits + b.total_waits)
4 * 100 AS "Full Scan-I/O-Waits %",
5 b.total_waits / (a.total_waits + b.total_waits)
6 * 100 AS "Index Scan-I/O-Waits %",
7 (b.average_wait / a.average_wait)
8 * 100 AS "=> optimizer_index_cost_adj"
9 FROM v$system_event a, v$system_event b
10 WHERE a.event = 'db file scattered read'
11 AND b.event = 'db file sequential read';
(b.average_wait / a.average_wait)
*
ERROR at line 7:
ORA-01476: divisor is equal to zero
Could you please suggest why it's giving the above error?
So we are not able to find out the proper value for optimizer_index_cost_adj.
Thanks for your help in advance.
Indranil
February 17, 2009 - 8:59 am UTC
... To find out the proper value for optimizer_index_cost_adj ...
let it default, do not set it.
use dbms_stats to gather system statistics (NOT stats on the sys tables, but system statistics that tell us your multiblock io, single block io, cpu speeds...)
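For example, a minimal sketch of gathering workload system statistics (gather during a window that represents your real load):
SQL> exec dbms_stats.gather_system_stats('START')
     ... let a representative workload run for a while ...
SQL> exec dbms_stats.gather_system_stats('STOP')
or, if you cannot run a representative workload:
SQL> exec dbms_stats.gather_system_stats('NOWORKLOAD')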
As for why it is giving an error - that is obvious.
You are dividing by zero: the query you wrote (or borrowed from someone) is blindly dividing by a value that happens to be zero.
But that is a good thing; you know the proper value - the DEFAULT value.
Thanks for your Reply Tom!
INDRANIL DAS, February 18, 2009 - 2:55 am UTC
Hi Tom,
Please note that this query:
select * from v$system_event where event in ('db file sequential read','db file scattered read');
is returning the following:
EVENT TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED AVERAGE_WAIT TIME_WAITED_MICRO
db file sequential read 3971099437 0 1739987175 0 17399871748052
db file scattered read 1404068959 0 434046633 0 4340466334933
That's why my previous query failed with division by zero. My question is: why are these two AVERAGE_WAIT values zero on our system? Is there any reason for them to be zero?
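As a workaround, I am thinking of computing the averages from TIME_WAITED_MICRO and TOTAL_WAITS instead, with a NULLIF guard against division by zero. Something like this (untested):
select round( (b.time_waited_micro / nullif(b.total_waits, 0))
            / nullif(a.time_waited_micro / nullif(a.total_waits, 0), 0)
            * 100 ) as starting_point
  from v$system_event a, v$system_event b
 where a.event = 'db file scattered read'
   and b.event = 'db file sequential read';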
Compatibility of oracle report with database
Vijay Bajaj, May 11, 2009 - 8:46 am UTC
Our application uses Report Builder 9.0.2.0.3, which comes with the Oracle Developer Suite 9.0.2.0.1 package.
The database for our application is currently Oracle 9i and will be upgraded to 11g.
Please let me know whether reports built with the existing Oracle Report Builder are compatible with Oracle 11g.
Regards,
Vijay Bajaj
May 11, 2009 - 5:28 pm UTC
please utilize support for such things?