Regarding referenced paper (and related questions)...
Matt, November 25, 2002 - 5:20 am UTC
The referenced paper identifies a query that can be used to identify the most suitable value for optimizer_index_cost_adj. The query uses v$system_event which requires the parameter timed_statistics to be set to true (to get timing info).
What is the performance cost of setting this value to true?
Additionally, when is it appropriate to set timed_os_statistics to true? The DB reference (9.0.1) indicates that timed_os_statistics will have a large performance hit, but I cannot find documentation to indicate where these stats are stored or how they may be used.
November 25, 2002 - 7:34 am UTC
The performance cost of setting timed_statistics to true is SO FAR OUTWEIGHED by the positive return on having it on that I run all systems with it set to true. You won't even notice it.
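If you want to turn it on, both forms are dynamic -- a minimal sketch, no restart needed:

alter session set timed_statistics = true;
-- or, instance wide:
alter system set timed_statistics = true;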
I've not really ever used timed_os_statistics (meaning, I haven't found that occasion yet myself ;)
Why cost based optimizer is behaving poorly - comment on part 2 of the answer
Abhijit Bhattacharya, November 30, 2002 - 10:56 am UTC
Tom,
Your advice about the CBO really worked. The PL/SQL that was taking more than 5 minutes to execute with the default values of optimizer_index_caching and optimizer_index_cost_adj is now taking 40 seconds to complete. Thanks for the excellent advice.
My comment pertains to section 2 of your reply (your "yuck" comment). The modified block that you presented here may not produce the same answer as the original PL/SQL block. Here is the sample data -
DELETE FROM ps_job;
DELETE FROM ps_sal_grade_tbl ;
INSERT INTO ps_job VALUES (1,1,1,'1-Jan-1990') ;
INSERT INTO ps_sal_grade_tbl VALUES (1,1,1,'12-Dec-1991') ;
The original PL/SQL block will print the value 0 (for COUNT(*)) because of this condition in the SELECT-INTO statement, effdt <= ps_job.effdt:
(SELECT MAX(effdt)
FROM ps_sal_grade_tbl
WHERE setid = a.setid
AND sal_admin_plan = a.sal_admin_plan
AND grade = a.grade
AND effdt <= ps_job_rec.effdt) <------
The modified PL/SQL block you presented will return 1 for COUNT(*) given the above data set.
Do you have any other ideas on how I could rewrite the query for better performance without changing its output? These tables with the 'ps_' prefix come from PeopleSoft. You are probably aware that records in a large number of PeopleSoft tables are effective-dated, forcing me to write "ugly" looking SQL/PL-SQL blocks. Your suggestions will be really appreciated.
November 30, 2002 - 4:48 pm UTC
Try selecting a select.
select ...., (select count(*) .... )
from ......
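For example, applied to the effective-date pattern from the question, it might look something like this (a sketch only -- it simply counts the qualifying effective-dated rows per PS_JOB row; the exact expression depends on what the original block counted, which isn't reproduced here):

select j.setid, j.sal_admin_plan, j.grade, j.effdt,
       (select count(*)
          from ps_sal_grade_tbl s
         where s.setid          = j.setid
           and s.sal_admin_plan = j.sal_admin_plan
           and s.grade          = j.grade
           and s.effdt         <= j.effdt) grade_rows
  from ps_job j;

The effdt <= j.effdt predicate stays inside the scalar subquery, so with the sample data above the single PS_JOB row would get a count of 0 rather than 1.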
Default values
Kalita, December 02, 2002 - 12:58 pm UTC
Is there any specific reason behind defaulting the value of optimizer_index_caching to 0 and optimizer_index_cost_adj to 100?
The paper was interesting.
December 02, 2002 - 12:59 pm UTC
not that I'm aware of.
Perhaps Oracle Apps is the reason
Robert D. Ware, December 12, 2002 - 2:55 pm UTC
Setting of OPTIMIZER_INDEX_COST_ADJ
Matt, February 10, 2003 - 7:50 pm UTC
In the Hotsos article it suggests using v$system_event to calculate the value of OPTIMIZER_INDEX_COST_ADJ.
This value represents the relative cost of physical IO for indexed access versus physical IO for full table scans.
The default value of 100 indicates that indexed access is just as costly (100% of the cost) as a full table scan.
SQL> l
1 SELECT event, total_waits waits, total_timeouts timeouts,
2 time_waited total_time, average_wait avg
3 FROM V$SYSTEM_EVENT
4 where event like 'db file s%'
5* order by 4 desc
SQL> /
EVENT WAITS TIMEOUTS TOTAL_TIME AVG
-------------------------- ---------- ---------- ---------- ----------
db file sequential read 2153635 0 4258337 2
db file scattered read 72712 0 265319 4
In the above case:
(db file sequential read/db file scattered read)*100 = (2/4)*100 = 50% [=> OPTIMIZER_INDEX_COST_ADJ = 50]
ie: On average physical IO for index reads takes half the time of physical IO for full table scans. So, the cost of IO for index reads is half that for full table scans and OPTIMIZER_INDEX_COST_ADJ should be set to 50.
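As a convenience, the same arithmetic can be run straight off V$SYSTEM_EVENT (a sketch; it assumes timed_statistics is on and both events have accumulated since instance startup -- the units cancel, so the ratio is unit independent):

select round( 100 * seq.average_wait / scat.average_wait ) as suggested_value
  from (select average_wait from v$system_event
         where event = 'db file sequential read') seq,
       (select average_wait from v$system_event
         where event = 'db file scattered read') scat;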
Does all the above hang together?
Is 50 a typical value for an 80% DSS / 20% OLTP system (currently using the RBO)?
February 11, 2003 - 8:08 am UTC
Actually, 50 is where many people recommend starting and adjusting up or down as you see fit on your system so -- sure.
Relative use of the two parameters
Matt, February 11, 2003 - 5:46 pm UTC
From what I have seen in practice and read about these parameters (optimizer_index_caching/optimizer_index_cost_adj),
the CBO is more sensitive to changes in optimizer_index_cost_adj. I plan to leave optimizer_index_caching well alone and only change optimizer_index_cost_adj.
Is this the usual way to go, or should both of these be tweaked, typically?
Is there a similar SQL query that could be used to highlight what the setting should be for optimizer_index_caching on a system that has been running for some time (or at least highlight a starting point for testing)?
February 12, 2003 - 8:11 am UTC
Actually, I go the other way on this.
index caching is subtle -- less obvious
index cost adj is like hitting it on the head with a hammer, a tad more aggressive.
In reality if your system is running fine, you feel the queries are doing about the right amount of work, setting these is not something you need to do. you do this in reaction to poorly performing situations.
I've found three values to be most useful in most cases for cost adj
o default
o 25'ish
o 10'ish
sort of like turning up the volume.
PL/SQL Cursor - select * from ps_job
Jerry, February 11, 2003 - 10:32 pm UTC
I don't do a lot of PL/SQL, but have worked quite a bit with PeopleSoft. PS_JOB, at least in version 8, literally has hundreds of columns. It also typically has many thousands of rows for a medium to large company. Our company has 500k rows.
Would there be a benefit to using only the columns he needs from PS_JOB in the cursor?
FOR ps_job_rec IN (SELECT *
FROM ps_job) LOOP
February 12, 2003 - 8:17 am UTC
absolutely -- yes, 100%
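A sketch of what that could look like (the column names here are only illustrative -- list exactly what the loop body references and nothing more):

begin
  for ps_job_rec in (select emplid, setid, sal_admin_plan, grade, effdt  -- only the columns actually used
                       from ps_job)
  loop
    null;  -- loop body elided in this sketch
  end loop;
end;
/

Fetching a handful of columns instead of hundreds cuts the amount of data dragged out of the buffer cache and through the PL/SQL engine on every row.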
our OLTP database shows a possible OPTIMIZER_INDEX_COST_ADJ > 100 !!!
William Brejnev, September 12, 2003 - 12:19 am UTC
Our OLTP and non-OLTP databases, currently in RULE mode, show the numbers below for "db file sequential read" and "db file scattered read". The ratio for the OLTP one is 135!! Would it mean that when we move to the CBO (this weekend!!) we will need to set OPTIMIZER_INDEX_COST_ADJ to 135? I'm suggesting our customer go with 10 for both OLTP and non-OLTP to start with.
Am I missing something? Do we need to run the query against v$system_event only AFTER moving to the CBO (choose)? Would it give us different ratios to set OPTIMIZER_INDEX_COST_ADJ?
Tom, need your help here ...
Thanks !!
SELECT event, total_waits waits, total_timeouts timeouts,
time_waited total_time, average_wait avg
FROM V$SYSTEM_EVENT
where event like 'db file s%'
order by 4 desc ;
(db file sequential read/db file scattered read)*100 = OPTIMIZER_INDEX_COST_ADJ
******************************************************************************************
******************************************************************************************
This is the OLTP database
SQL> connect system/######@cup1cl
EVENT WAITS TIMEOUTS TOTAL_TIME AVG
-------------------------- -------------------- ---------- -------------------- ----------
db file parallel read 11,805 0 4,700 .398136383
db file parallel write 9,830,033 868 90,429,755 9.19933382
db file scattered read 217,910,796 0 6,196,974,557 28.4381255
db file sequential read 2,025,336,069 0 77,758,266,508 38.3927723
db file single write 147,667 0 84,086 .569429866
(38/28)*100 = 135
******************************************************************************************
******************************************************************************************
This is not an OLTP database (keeps phone call detail records)
SQL> connect system/######@usp1cl
Connected.
SQL> /
EVENT WAITS TIMEOUTS TOTAL_TIME AVG
-------------------------- -------------------- ---------- -------------------- ----------
db file parallel read 7,103 0 3,196 .449950725
db file parallel write 1,142,980 74 38,033,440 33.2756829
db file scattered read 95,004,952 0 1,714,552,258 18.0469778
db file sequential read 80,075,541 0 33,992,534 .42450583
db file single write 51,036 0 58,247 1.14129242
(.4/18)*100 = 2
******************************************************************************************
******************************************************************************************
This is not an OLTP database (keeps phone call detail records)
SQL> connect system/######@usp2cl
Connected.
SQL> /
EVENT WAITS TIMEOUTS TOTAL_TIME AVG
-------------------------- -------------------- ---------- -------------------- ----------
db file parallel read 12,218 0 4,774 .390734981
db file parallel write 1,730,443 138 31,969,755 18.4748963
db file scattered read 40,372,613 0 8,460,756,268 209.566725
db file sequential read 88,052,150 0 55,670,052 .632239553
db file single write 137,113 0 70,884 .51697505
(.63/209)*100 = 0.3
******************************************************************************************
******************************************************************************************
September 12, 2003 - 9:56 am UTC
you are moving to CBO this weekend and you are just trying to figure these things out today?
you have a recipe for a really long week coming up -- I forecast lots of anger and frustration.
any chance you can reschedule? so you can
o set up a test environment
o run your stuff
o test it to SCALE
o have your performance tuning exercise performed in private, without end users screaming at you
and then roll it out??
our OLTP database shows a possible OPTIMIZER_INDEX_COST_ADJ > 100 !!!
William Brejnev, September 17, 2003 - 12:00 am UTC
Tom, it seems we were able to reschedule it for 2 weeks from now. We have run tests against the application data layers to generate all the query plans (using CHOOSE) and compare them with the same query plans taken with the DB in RULE mode. We have seen that setting optimizer_index_cost_adj to 10 gave us a good result, with queries making correct use of the indexes in lieu of full table scans (note that this is mainly an OLTP DB).
Actually, my main doubts I sent you before were about:
(1) how could we have a factor of 135 for an OLTP DB?
(2) should we run this magic query against v$system_event only after moving to the CBO, or would it give the same results with the RBO?
(3) is this query against v$system_event reliable for finding out what the current/desired optimizer_index_cost_adj would be? Why does it show me 135 for an OLTP DB and 2 for a batch DB?
(4) and would it be better to have FIRST_ROWS with optimizer_index_cost_adj set to 1 for an OLTP database? I know one big company that did it successfully.
I kept my suggestion to the customer to go with CHOOSE and optimizer_index_cost_adj=10 for them to start their volume tests. I don't have the means to do a volume test on our local DB, so we used the method of verifying the query plans on our data layers, as I mentioned at the top.
If there is one thing I want to avoid, it is the dissatisfaction of our customer. I hope our suggestion will work fine; I don't see why it wouldn't.
But I'm sure you would be able to tell me so.
Thanks again,
William Brejnev
September 17, 2003 - 6:57 am UTC
1) i don't like ratios. and as much as I hate ROT (rules of thumb) i was bold enough to offer up two extremes in my latest book:
You should consider setting these to nondefault values for many systems, or at least testing the two extremes:
o The default settings of OPTIMIZER_INDEX_CACHING = 0 and OPTIMIZER_INDEX_COST_ADJ = 100. These are typically appropriate for many data warehouse/reporting systems.
o The settings of OPTIMIZER_INDEX_CACHING = 90 and OPTIMIZER_INDEX_COST_ADJ = 25. These are typically appropriate for many transactional/OLTP systems.
2) that ratio could be used to give you a starting point (not even sure if you have a valid ratio formula there, don't really use it).
3) i don't think so, could be, not sure, haven't really investigated it myself.
4) better? after TESTING and being happy with the results -- it could be.
A reader, March 24, 2004 - 4:42 pm UTC
Hi Tom, I was reading articles like
http://www.dbazine.com/jlewis18.shtml
As I understand it, the best you can do to set the optimizer_index_cost_adj and optimizer_index_caching values is to estimate them from ratios.
Could you please give me the ones for 9i, and any other advice on setting these parameters?
Or is the only thing you have starting ratios and then testing around them?
Thanks :)
March 24, 2004 - 6:29 pm UTC
I think those two have two, maybe three, settings.
way to the right (defaults)
way to the left
right in the middle
you can play with ratios but I think you'll find
0,100 (default)
90,10 (way to the left)
50,50 (maybe, hypothesis)
are what you might be looking for. something as fine grained as "82,17"... I don't think I'd spend the time.
A reader, March 24, 2004 - 7:17 pm UTC
Thanks Tom,
the point is the following:
in your new book (Effective Oracle by Design) I found the use of system statistics, but what I want is to get an average. My system is a small OLTP and DSS mix, so I can only take statistics from one process at a time.
I supposed ratios could give me an average value to set both parameters.
I have been reading several articles and there is no perfect value, but I was wondering if there could be a way to find the average values for all the processes in the day.
Or a way to gather system statistics over several hours.
Thanks Tom again
March 25, 2004 - 8:46 am UTC
statspack gives you averages over all processes.
but I'll just say "to the left for oltp perhaps" and "to the right for dss perhaps" and you can use alter sessions to let each have their own way (eg: write a "startup" package that you call and have people call startup.oltp or startup.dss after connecting for example..)
How to set values using statspack?
A reader, March 25, 2004 - 9:46 am UTC
Thanks Tom, can you please help me with this: once I have the statspack reports, based on which parameter and using which formula do I give a value to both parameters?
OPTIMIZER_INDEX_COST_ADJ =
OPTIMIZER_INDEX_CACHING =
:) Thanks Tom.
After reading a lot about this, this is my conclusion; do you agree?
The conclusion seems to be: gather statistics for every distinct process or group of processes, and set these values manually for each of them.
If the processing varies by time, for example night and day, you can use alter system to set the parameter;
if it is only for a particular process you will have to use alter session.
And set the value.
This seems to be hard work.
March 25, 2004 - 9:00 pm UTC
sorry -- as i said -- i see two, maybe three settings regardless.
way to the left
way to the right
based on the type of application (which you already know)... the end.
A reader, March 25, 2004 - 11:04 am UTC
Tom, something other people have been telling me is:
do not set either of these parameters.
What do you think about this?
For example:
----- Original Message -----
From: "Cary Millsap" <cary.millsap@hotsos.com>
To: "'Juan Cachito Reyes Pacheco'" <jreyes@dazasoftware.com>
Cc: "'Jared Still'" <jkstill@cybcon.com>
Sent: Thursday, March 25, 2004 11:41 AM
Subject: RE: how to set optimizer_index_cost_adj and cache parameters?
> I agree with what I believe Jared has said. The correct procedure is not
> to mess with the parameters. Adjusting the parameters is a futile
> attempt to respond to the symptoms. If you have query optimization
> problems, address them in the more productive way, by fixing your SQL,
> ensuring that you have representative statistics, etc.
>
>
> Cary Millsap
> Hotsos Enterprises, Ltd.
> http://www.hotsos.com
> * Nullius in verba *
March 25, 2004 - 9:09 pm UTC
I have not seen the entire thread so cannot comment out of context (it might be considered "rude" to post private emails, don't you think?)
A reader, March 26, 2004 - 8:07 am UTC
Hi Tom, I agree with you that copying private emails is "rude", but these are not private emails, and all the information posted here is the same as you can find on the oracle-l list if you subscribe there.
You can search here:
http://www.freelists.org/archives/oracle-l/03-2004/index.html
I started two threads trying to find a definitive answer and as I couldn't get it, I decided to ask you.
What they say, briefly, is "don't touch them", and they seem to be right based on the fact that several DBAs had and have correct execution plans without changing those values; but anyway, if you want to see the thread you can search there.
Please Tom, I have been investigating a long time to find a definitive answer about the use of both parameters.
I understand you are not a god, nor do you know all the answers, but could you please give your position on the use of both parameters.
There are two opinions:
"Don't use these parameters unless you see that you need to"
"Set them according to the kind of processing you do"
March 26, 2004 - 9:31 am UTC
haven't I already done that?
way to the left
way to the right
done in TEST and DEV (not just thrown out on prod).
A reader, March 26, 2004 - 2:18 pm UTC
Thanks
Not going for FAST FULL SCAN
Sami, May 17, 2004 - 9:34 am UTC
Dear Tom,
For the below query I wanted to read everything from composite index e1_idx01 rather than touching the table.
So I did,
create index e1_idx01 on e1(column1,column2,column3,column4);
analyze table e1 compute statistics for table for all indexes for all indexed columns ;
select * from (
select e.COLUMN4,
p.processed_by,
p.last_name,
p.first_name,
p.company_name,
c.countryname,
e.COLUMN2,
to_char(e.COLUMN3,'dd mon, yy ')||'at '||to_char(e.COLUMN3,'hh24:mi ')||'gmt' formattedlastupdatedate,
p.userid
from p1 p,
e1 e,
c1 c
where p.p1id= e.COLUMN1
and p.business_country_id = c.countryabbrev
and p.hsbc_user_category in ('gib','hibm')
and p.region in(4049)
and e.COLUMN4 in (1,4,5)
order by e.COLUMN3 desc) where rownum <=6
;
But still I see
9 4 TABLE ACCESS (BY INDEX ROWID) OF 'E1' (Cost=2 Card=6177 Bytes=315027)
10 9 INDEX (UNIQUE SCAN) OF 'E1_PK' (UNIQUE) (Cost=1 Card=6177)
instead of
INDEX (FAST FULL SCAN) OF 'E1_IDX01'
Is there any specific reason why oracle is still going for table access even though it can find everything in E1_IDX01 index?
May 17, 2004 - 3:08 pm UTC
need entire example.
it was apparently saying "primary key access better than ffs of index" which could be very reasonable indeed.
optimizer_index_cost_adj
Branka, July 14, 2004 - 5:24 pm UTC
I tried to change value for optimizer_index_cost_adj, but I get following error.
What am I doing wrong?
sys@TPROD3> alter system set optimizer_index_cost_adj=15;
alter system set optimizer_index_cost_adj=15
*
ERROR at line 1:
ORA-02096: specified initialization parameter is not modifiable with this option
July 15, 2004 - 11:49 am UTC
it is saying "you cannot alter system that one".
ops$tkyte@ORA9IR2> exec print_table( 'select * from v$parameter where name = ''optimizer_index_cost_adj'' ' );
NUM : 689
NAME : optimizer_index_cost_adj
TYPE : 3
VALUE : 100
ISDEFAULT : TRUE
ISSES_MODIFIABLE : TRUE
ISSYS_MODIFIABLE : FALSE
ISMODIFIED : FALSE
ISADJUSTED : FALSE
DESCRIPTION : optimizer index cost adjustment
UPDATE_COMMENT :
-----------------
PL/SQL procedure successfully completed.
it can be modified in the session (isses_modifiable), but is not SYSTEM modifiable.
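So, for example, the session-level form is accepted (and the application or a logon trigger could issue it for you):

alter session set optimizer_index_cost_adj = 15;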
A reader, July 19, 2004 - 11:57 am UTC
what is optimizer_index_cost_adj for? Do we need to adjust this value? And how do I decide on it?
Thanks
July 19, 2004 - 12:12 pm UTC
did you read the original answer and related links??
Cost
Shivaswamy, October 27, 2004 - 1:40 pm UTC
Tom,
For long I was not looking at the cost of a query in the plan. But of late, I think there may at least be a linear relationship between cost and execution time.
But I came across a job which has a cost of 664 and runs for hours on our not very busy hybrid system (12 CPUs). It has run into double-digit hours now. Incidentally, it is doing an index fast full scan on an index whose table has less than a million rows. V$session_longops suggests it may still take 2000 minutes and has finished 1000 minutes so far. I primarily see the following statistics -- transaction tables consistent reads - undo records applied, consistent changes, consistent gets - examination, consistent gets, session logical reads -- hitting 8-digit values since I started looking about 3 hours ago. The posted wait event for the session is db file sequential read.
Utlxpls gives me:
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
-----------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | 144 | 1094 (1)|
| 1 | SORT GROUP BY | | 1 | 144 | 1094 (1)|
|* 2 | TABLE ACCESS BY INDEX ROWID| IPP020 | 1 | 29 | 2 (50)|
| 3 | NESTED LOOPS | | 1 | 144 | 1091 (0)|
|* 4 | HASH JOIN | | 1431 | 160K| 375 (0)|
|* 5 | HASH JOIN | | 3731 | 247K| 161 (0)|
| 6 | INDEX FAST FULL SCAN | CLTCOF1_IDX3 | 3333 | 46662 | 1 (0)|
| 7 | TABLE ACCESS FULL | EAPACTIVITY | 343K| 17M| 146 (0)|
|* 8 | INDEX FAST FULL SCAN | IPP566_IDX1 | 121K| 5582K| 199 (0)|
|* 9 | INDEX RANGE SCAN | IPP020_IDX1 | 1 | | |
-----------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(("IP20"."IPCTBL"='002' OR "IP20"."IPCTBL"='009' OR
"IP20"."IPCTBL"='022' OR "IP20"."IPCTBL"='027' OR "IP20"."IPCTBL"='030' OR
"IP20"."IPCTBL"='039' OR "IP20"."IPCTBL"='042' OR "IP20"."IPCTBL"='049') AND
"IP20"."IPCTBL"="IP56"."LTVTID" AND "IP20"."IPCCHC"="IP56"."LTVVAL")
4 - access("EAP"."COMPANYNUM"="IP56"."LTVCO" AND
"EAP"."CONTRACTID"="IP56"."LTVCON")
5 - access("EAP"."COMPANYNUM"="COF1"."CCDNUM" AND
"EAP"."CONTRACTID"="COF1"."CCDCTN")
8 - filter("IP56"."LTVTID"='002' OR "IP56"."LTVTID"='009' OR
"IP56"."LTVTID"='022' OR "IP56"."LTVTID"='027' OR "IP56"."LTVTID"='030' OR
"IP56"."LTVTID"='039' OR "IP56"."LTVTID"='042' OR "IP56"."LTVTID"='049')
9 - access("EAP"."CASENUM"="IP20"."IPCCAS" AND
"EAP"."EPISODENUM"="IP20"."IPCEPS" AND "EAP"."ENCOUNTERNUM"="IP20"."IPCENC")
If it is IFFS and I have optimizer_index_caching=80 & optimizer_index_cost_adj=20, how is it possible, it runs soo long? Where else can I look?
Thanks in advance,
October 27, 2004 - 3:47 pm UTC
you do understand that those two parameters only change numbers for the optimizer - only make indexes look cheaper or more expensive.
they have NOTHING whatsoever to do with runtime performance of index operations *at all*
they are not anything like db_file_multiblock_read_count. that actually changes the performance of a full scan.
All you accomplish by setting them is making index scans (in this case, given your settings) cheaper as far as the CBO is concerned.
consider:
ops$tkyte@ORA9IR2> set echo on
ops$tkyte@ORA9IR2> set linesize 121
ops$tkyte@ORA9IR2> /*
DOC>drop table t;
DOC>
DOC>create table t
DOC>as
DOC>select * from all_objects order by reverse(object_id);
DOC>
DOC>create index t_idx on t(object_id);
DOC>exec dbms_stats.gather_table_stats( user, 'T', cascade => true );
DOC>*/
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter session set optimizer_index_caching = 0;
Session altered.
ops$tkyte@ORA9IR2> alter session set optimizer_index_cost_adj = 100;
Session altered.
ops$tkyte@ORA9IR2> set autotrace traceonly explain
ops$tkyte@ORA9IR2> select * from t where object_id < 45;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=38 Card=36 Bytes=3348)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=38 Card=36 Bytes=3348)
2 1 INDEX (RANGE SCAN) OF 'T_IDX' (NON-UNIQUE) (Cost=2 Card=36)
ops$tkyte@ORA9IR2> select * from t where object_id < 50;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=40 Card=40 Bytes=3720)
1 0 TABLE ACCESS (FULL) OF 'T' (Cost=40 Card=40 Bytes=3720)
ops$tkyte@ORA9IR2> set autotrace off;
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter session set optimizer_index_caching = 80;
Session altered.
ops$tkyte@ORA9IR2> alter session set optimizer_index_cost_adj = 20;
Session altered.
ops$tkyte@ORA9IR2> set autotrace traceonly explain
ops$tkyte@ORA9IR2> select * from t where object_id < 45;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=8 Card=36 Bytes=3348)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=8 Card=36 Bytes=3348)
2 1 INDEX (RANGE SCAN) OF 'T_IDX' (NON-UNIQUE) (Cost=2 Card=36)
ops$tkyte@ORA9IR2> select * from t where object_id < 50;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=9 Card=40 Bytes=3720)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=9 Card=40 Bytes=3720)
2 1 INDEX (RANGE SCAN) OF 'T_IDX' (NON-UNIQUE) (Cost=2 Card=40)
ops$tkyte@ORA9IR2> set autotrace off;
Now, since the optimizer_index parameters just change the costing model of the CBO, and don't affect runtime performance of index operations *at all* -- is that second "where object_id < 45" really going to run faster with a cost of 8 than the original with a cost of 38? (rhetorical question, answer = no)
But I would ask you -- are your stats up to date there? the cost of 1 for the index fast full scan which you say is taking hours is very low. doesn't seem high enough for an IFF of a really large segment.
Shivaswamy, October 27, 2004 - 5:30 pm UTC
Tom,
Thanks for your reply. Yes, I understand that, it only affects Cost Model and not the actual performance.
And on the second point, Yes Tom, the stats are current. But the actual IFFS is the second one (ID#8) with a cost of 199. Yes, the table has the latest stats.
1 select table_name, num_rows, to_char(last_analyzed, 'DD-MON-YY HH24:MI:SS') A
2 from dba_tables
3 where table_name like upper('IPP566')
4* order by owner, table_name
17:04:25 >/
Last
TABLE_NAME NUM_ROWS Analyzed
------------------------------ ---------------- --------------------
IPP566 754,333 26-OCT-04 13:26:09
The longop details is:
16:57:26 >l
1 SELECT message, L.OPNAME, elapsed_seconds/60 "Min Spent", (L.SOFAR/L.TOTALWORK)*100 "Percent", time_remaining/60 "Remaining Min"
2 FROM V$SESSION_LONGOPS L, V$SESSION S
3 WHERE L.SID=S.SID
4 AND S.STATUS = 'ACTIVE'
5 and time_remaining/60 > 0
6* and l.sid=307
16:57:29 >/
MESSAGE OPNAME Min Spent Percent Remaining Min
------------------------------ ---------------- ----------- ---------- -------------
Index Fast Full Scan: DW_REPUSER.IPP566: 1237 out of 3263 Blocks done Index Fast Full Scan 1,472.68 37.9098989 2412.01
You don't want to look at "transaction tables consistent reads - undo records applied"?
16:59:43 >l
1 select v.name, (v.value - t.value) V
2 from v, t
3 where v.name = t.name
4 and (v.value-t.value) > 0
5* order by 2;
--> T & V are two snapshots of the stats of the session, about 6 hours apart.
--> Snipped
It has done about 28 blocks in the past 6 hours or so.
NAME Values
---------------------------------------------------------- ----------------
buffer is pinned count 11,747,918
transaction tables consistent reads - undo records applied 28,537,112
consistent changes 28,554,162
consistent gets - examination 29,040,092
consistent gets 30,080,473
session logical reads 30,080,474
Thank you for your help.
October 27, 2004 - 6:13 pm UTC
<quote>
If it is IFFS and I have optimizer_index_caching=80 &
optimizer_index_cost_adj=20, how is it possible, it runs soo long? Where else
can I look?
</quote>
well, that comment confused me, it sounded like you were implying there is something about optimizer_index_* and the numbers that would make an IFF *faster* (the way you have them set would actually reduce the odds of the IFF being chosen, but it would not make it run any faster)
no idea what "v" and "t", so no, don't want to look at them.
could well be that you have a really small hash area size/pga_aggregate_target and are basically "swapping" like mad.
could also be that you are taking a long time to read a really really "hot" table with lots of modifications.
what's going on with this table whilst you are doing this query? v$session_wait might be interesting for you to look at as well.
worthless discussion?
Mikito harakiri, October 27, 2004 - 6:05 pm UTC
Not being sure if this thread is worth anything, I just searched for the word "fallacies". Yep, no references to Wolfgang Breitling's paper "Fallacies of the Cost Based Optimizer". This is required reading!
1 star to Mikito assumption:
Franco, October 28, 2004 - 7:09 am UTC
Stairs are useless because somebody sometimes falls down and breaks a leg.
:-)
Details
Shivaswamy, October 28, 2004 - 9:36 am UTC
Tom,
Here are the details. Sorry for not being clear.
At time = t1:
create table t as
( SELECT A.NAME, B.VALUE
    FROM V$STATNAME A, V$SESSTAT B
   WHERE B.SID = 307              -- SID running the IFFS forever
     AND A.STATISTIC# = B.STATISTIC#
     AND VALUE > 0 )
ORDER BY VALUE DESC;

create view v as
( SELECT A.NAME, B.VALUE
    FROM V$STATNAME A, V$SESSTAT B
   WHERE B.SID = 307              -- SID running the IFFS forever
     AND A.STATISTIC# = B.STATISTIC#
     AND VALUE > 0 )
ORDER BY VALUE DESC;
At time =t1+ ~20Hrs:
col v for 999,999,999,999 heading Values
select v.name, (v.value - t.value) V
from v, t
where v.name = t.name
and (v.value-t.value) > 0
order by 2;
NAME Values
---------------------------------------------- ----------------
calls to get snapshot scn: kcmgss 1
db block gets 1
enqueue releases 7
enqueue requests 7
cleanouts only - consistent read gets 11
CR blocks created 164
current blocks converted for CR 232
transaction tables consistent read rollbacks 425
prefetched blocks 440
physical writes 456
physical writes direct 456
physical writes non checkpoint 456
cleanouts and rollbacks - consistent read gets 728
cleanout - number of ktugct calls 739
immediate (CR) block cleanout applications 739
active txn count during cleanout 1,046
commit txn count during cleanout 1,823
rollbacks only - consistent read gets 5,712
pinned buffers inspected 8,816
data blocks consistent reads - undo records applied 27,514
messages sent 29,673
dirty buffers inspected 74,438
free buffer inspected 87,233
hot buffers moved to head of LRU 215,661
prefetched blocks aged out before use 657,428
index scans kdiixs1 691,399
buffer is not pinned count 2,137,514
no work - consistent read gets 2,181,176
physical reads 9,706,614
free buffer requested 9,706,861
calls to kcmgas 9,707,002
shared hash latch upgrades - no wait 9,920,344
table fetch by rowid 17,383,096
buffer is pinned count 33,320,077
transaction tables consistent reads - undo records applied 138,449,409
consistent changes 138,477,663
consistent gets - examination 139,856,466
consistent gets 142,733,544
session logical reads 142,733,545
39 rows selected.
And the V$SESSION_WAIT output: I keep getting the same event for different P1s -- mainly for the data file, but also for the undo data file.
SELECT w.sid, w.event, w.P1TEXT, w.P1, w.SECONDS_IN_WAIT
FROM V$SESSION_WAIT w, v$session s
WHERE STATE LIKE 'WAITING'
AND EVENT NOT LIKE 'SQL*Net message%' and event not like 'rdbms ipc%'
and event not like 'slave wait'
and s.username is not null
and event not like 'pmon timer'
and event not like 'smon timer'
and event not like 'slave waits'
and w.sid = s.sid
and s.sid=307
ORDER BY 3;
SID EVENT P1TEXT P1 SECONDS_IN_WAIT
--- ------------------------------ -------------------- ---------- ---------------
307 db file sequential read file# 21 0
And the two parameters:
09:09:09 >show parameter workarea
NAME TYPE VALUE
------------------------------------ -------------------------------- --------------
workarea_size_policy string AUTO
09:09:17 >show parameter pga_agg
NAME TYPE VALUE
------------------------------------ -------------------------------- --------------
pga_aggregate_target unknown 1073741824
This table is not busy otherwise. Not used by other processes.
Any feedback?
October 28, 2004 - 12:33 pm UTC
the table must be being used by other processes -- if the FFS of the index is doing lots of undo application, there is no other cause.
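One quick sketch of a way to check for concurrent DML against the table while the query runs (IPP566 is the object name taken from the output above; you might also look at v$transaction for active transactions):

select lo.session_id, lo.oracle_username, lo.locked_mode
  from v$locked_object lo, dba_objects o
 where lo.object_id  = o.object_id
   and o.object_name = 'IPP566';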
(optimizer) Compatible 8.1.7
Shivaswamy, October 29, 2004 - 3:35 pm UTC
Tom,
I set the compatible to 8.1.7 and I am able to run the same query (without insert) in less than three hours on the same system.
715548 rows selected.
Elapsed: 02:151:9060.58
This is against the same query not having completed even after about 24 hours with compatible=9.2.0.0.
Care to advise?
Shivaswamy
October 29, 2004 - 5:09 pm UTC
sorry but "compatible" won't affect 'read consistency'.
if you think this is an issue with reads to rollback -- you must be updating the data whilst reading it, no other answer.
Query Under RBO Uses Good Index, Under CBO uses Bad Index
A reader, April 26, 2005 - 5:18 pm UTC
What may be the reasons for this? When I add the Index hint, the query uses the good index in CBO. But I would prefer to not use any hints.
What should I look into (or is this too vague)?
April 26, 2005 - 5:40 pm UTC
too vague, lets see the autotrace traceonly explain and the TKPROF of both queries. (assuming they are small enough from top to bottom -- eg; if this is a 15 page explain plan -- way too big)
A reader, June 10, 2005 - 11:24 am UTC
Tom,
Can you please help me to understand this behavior?
The query
SELECT /*+ rule */ 1 FROM DBA_EXTENTS WHERE ROWNUM = 1;
completes in 34 seconds, and it takes over 30 minutes if the rule hint is not used. This is on the Development database. A Test instance of the same database completes either version of the query in under a second. Both instances have "close" init parameters.
The plan on Dev for the query with rule hint is:
SQL> @pqplan2 a1
Execution Plan....................................................
QUERY_PLAN CARDINALITY COST
---------------------------------------------------------------------------------- ------------------------ ---------------
SELECT STATEMENT
1-2.1 COUNT STOPKEY
2-3.1 VIEW "DBA_EXTENTS"
3-4.1 UNION-ALL
4-5.1 NESTED LOOPS
5-6.1 NESTED LOOPS
6-7.1 VIEW "SYS_DBA_SEGS"
7-8.1 UNION-ALL
8-9.1 NESTED LOOPS
9-10.1 NESTED LOOPS
10-11.1 NESTED LOOPS
11-12.1 NESTED LOOPS
12-13.1 NESTED LOOPS
13-14.1 VIEW "SYS_OBJECTS"
14-15.1 UNION-ALL
15-16.1 TABLE ACCESS FULL "TAB$"
15-16.2 TABLE ACCESS FULL "TABPART$"
15-16.3 TABLE ACCESS FULL "CLU$"
15-16.4 TABLE ACCESS FULL "IND$"
15-16.5 TABLE ACCESS FULL "INDPART$"
15-16.6 TABLE ACCESS FULL "LOB$"
15-16.7 TABLE ACCESS FULL "TABSUBPART$"
15-16.8 TABLE ACCESS FULL "INDSUBPART$"
15-16.9 TABLE ACCESS FULL "LOBFRAG$"
13-14.2 TABLE ACCESS BY INDEX ROWID "OBJ$"
14-15.1 INDEX UNIQUE SCAN "I_OBJ1" UNIQUE
12-13.2 TABLE ACCESS CLUSTER "SEG$"
13-14.1 INDEX UNIQUE SCAN "I_FILE#_BLOCK#" NON-UNIQUE
11-12.2 INDEX UNIQUE SCAN "I_FILE2" UNIQUE
10-11.2 TABLE ACCESS CLUSTER "TS$"
11-12.1 INDEX UNIQUE SCAN "I_TS#" NON-UNIQUE
9-10.2 TABLE ACCESS CLUSTER "USER$"
10-11.1 INDEX UNIQUE SCAN "I_USER#" NON-UNIQUE
8-9.2 NESTED LOOPS
9-10.1 NESTED LOOPS
10-11.1 NESTED LOOPS
11-12.1 NESTED LOOPS
12-13.1 TABLE ACCESS FULL "UNDO$"
12-13.2 INDEX UNIQUE SCAN "I_FILE2" UNIQUE
11-12.2 TABLE ACCESS CLUSTER "SEG$"
12-13.1 INDEX UNIQUE SCAN "I_FILE#_BLOCK#" NON-UNIQUE
10-11.2 TABLE ACCESS CLUSTER "TS$"
11-12.1 INDEX UNIQUE SCAN "I_TS#" NON-UNIQUE
9-10.2 TABLE ACCESS CLUSTER "USER$"
10-11.1 INDEX UNIQUE SCAN "I_USER#" NON-UNIQUE
8-9.3 NESTED LOOPS
9-10.1 NESTED LOOPS
10-11.1 NESTED LOOPS
11-12.1 TABLE ACCESS FULL "FILE$"
11-12.2 TABLE ACCESS CLUSTER "SEG$"
12-13.1 INDEX RANGE SCAN "I_FILE#_BLOCK#" NON-UNIQUE
10-11.2 TABLE ACCESS CLUSTER "TS$"
11-12.1 INDEX UNIQUE SCAN "I_TS#" NON-UNIQUE
9-10.2 TABLE ACCESS CLUSTER "USER$"
10-11.1 INDEX UNIQUE SCAN "I_USER#" NON-UNIQUE
6-7.2 TABLE ACCESS CLUSTER "UET$"
7-8.1 INDEX UNIQUE SCAN "I_FILE#_BLOCK#" NON-UNIQUE
QUERY_PLAN CARDINALITY COST
---------------------------------------------------------------------------------- ------------------------ ---------------
5-6.2 INDEX UNIQUE SCAN "I_FILE2" UNIQUE
4-5.2 MERGE JOIN
5-6.1 SORT JOIN
6-7.1 NESTED LOOPS
7-8.1 FIXED TABLE FULL "X$KTFBUE"
7-8.2 INDEX UNIQUE SCAN "I_FILE2" UNIQUE
5-6.2 SORT JOIN
6-7.1 VIEW "SYS_DBA_SEGS"
7-8.1 UNION-ALL
8-9.1 NESTED LOOPS
9-10.1 NESTED LOOPS
10-11.1 NESTED LOOPS
11-12.1 NESTED LOOPS
12-13.1 NESTED LOOPS
13-14.1 VIEW "SYS_OBJECTS"
14-15.1 UNION-ALL
15-16.1 TABLE ACCESS FULL "TAB$"
15-16.2 TABLE ACCESS FULL "TABPART$"
15-16.3 TABLE ACCESS FULL "CLU$"
15-16.4 TABLE ACCESS FULL "IND$"
15-16.5 TABLE ACCESS FULL "INDPART$"
15-16.6 TABLE ACCESS FULL "LOB$"
15-16.7 TABLE ACCESS FULL "TABSUBPART$"
15-16.8 TABLE ACCESS FULL "INDSUBPART$"
15-16.9 TABLE ACCESS FULL "LOBFRAG$"
13-14.2 TABLE ACCESS BY INDEX ROWID "OBJ$"
14-15.1 INDEX UNIQUE SCAN "I_OBJ1" UNIQUE
12-13.2 TABLE ACCESS CLUSTER "SEG$"
13-14.1 INDEX UNIQUE SCAN "I_FILE#_BLOCK#" NON-UNIQUE
11-12.2 INDEX UNIQUE SCAN "I_FILE2" UNIQUE
10-11.2 TABLE ACCESS CLUSTER "TS$"
11-12.1 INDEX UNIQUE SCAN "I_TS#" NON-UNIQUE
9-10.2 TABLE ACCESS CLUSTER "USER$"
10-11.1 INDEX UNIQUE SCAN "I_USER#" NON-UNIQUE
8-9.2 NESTED LOOPS
9-10.1 NESTED LOOPS
10-11.1 NESTED LOOPS
11-12.1 NESTED LOOPS
12-13.1 TABLE ACCESS FULL "UNDO$"
12-13.2 INDEX UNIQUE SCAN "I_FILE2" UNIQUE
11-12.2 TABLE ACCESS CLUSTER "SEG$"
12-13.1 INDEX UNIQUE SCAN "I_FILE#_BLOCK#" NON-UNIQUE
10-11.2 TABLE ACCESS CLUSTER "TS$"
11-12.1 INDEX UNIQUE SCAN "I_TS#" NON-UNIQUE
9-10.2 TABLE ACCESS CLUSTER "USER$"
10-11.1 INDEX UNIQUE SCAN "I_USER#" NON-UNIQUE
8-9.3 NESTED LOOPS
9-10.1 NESTED LOOPS
10-11.1 NESTED LOOPS
11-12.1 TABLE ACCESS FULL "FILE$"
11-12.2 TABLE ACCESS CLUSTER "SEG$"
12-13.1 INDEX RANGE SCAN "I_FILE#_BLOCK#" NON-UNIQUE
10-11.2 TABLE ACCESS CLUSTER "TS$"
11-12.1 INDEX UNIQUE SCAN "I_TS#" NON-UNIQUE
9-10.2 TABLE ACCESS CLUSTER "USER$"
10-11.1 INDEX UNIQUE SCAN "I_USER#" NON-UNIQUE
Partition access information......................................
Parallel operations information.......................................
SQL> set echo off
SQL> @pqplan2 a2
Execution Plan....................................................
QUERY_PLAN CARDINALITY COST
---------------------------------------------------------------------------------- ------------------------ ---------------
SELECT STATEMENT Cost = 27 1 27
1-2.1 COUNT STOPKEY
2-3.1 VIEW "DBA_EXTENTS" 2 27
3-4.1 UNION-ALL
4-5.1 NESTED LOOPS 1 2
5-6.1 NESTED LOOPS 18 2
6-7.1 VIEW "SYS_DBA_SEGS" 1 1
7-8.1 UNION-ALL
8-9.1 NESTED LOOPS 1 244
9-10.1 NESTED LOOPS 1 243
10-11.1 HASH JOIN 1 242
11-12.1 NESTED LOOPS 1 25
12-13.1 NESTED LOOPS 1 24
13-14.1 TABLE ACCESS FULL "SEG$" 164 24
13-14.2 INDEX UNIQUE SCAN "I_FILE2" UNIQUE 1
12-13.2 TABLE ACCESS CLUSTER "TS$" 1 1
13-14.1 INDEX UNIQUE SCAN "I_TS#" NON-UNIQUE 1
11-12.2 VIEW "SYS_OBJECTS" 98,814 216
12-13.1 UNION-ALL
13-14.1 TABLE ACCESS FULL "TAB$" 164 24
13-14.2 TABLE ACCESS FULL "TABPART$" 16,360 24
13-14.3 TABLE ACCESS FULL "CLU$" 16,360 24
13-14.4 TABLE ACCESS FULL "IND$" 164 24
13-14.5 TABLE ACCESS FULL "INDPART$" 16,360 24
13-14.6 TABLE ACCESS FULL "LOB$" 326 24
13-14.7 TABLE ACCESS FULL "TABSUBPART$" 16,360 24
13-14.8 TABLE ACCESS FULL "INDSUBPART$" 16,360 24
13-14.9 TABLE ACCESS FULL "LOBFRAG$" 16,360 24
10-11.2 TABLE ACCESS BY INDEX ROWID "OBJ$" 2 1
11-12.1 INDEX UNIQUE SCAN "I_OBJ1" UNIQUE 1
9-10.2 TABLE ACCESS CLUSTER "USER$" 1 1
10-11.1 INDEX UNIQUE SCAN "I_USER#" NON-UNIQUE 1
8-9.2 NESTED LOOPS 1 27
9-10.1 NESTED LOOPS 1 26
10-11.1 NESTED LOOPS 1 25
11-12.1 NESTED LOOPS 1 24
12-13.1 TABLE ACCESS FULL "UNDO$" 818 24
12-13.2 INDEX UNIQUE SCAN "I_FILE2" UNIQUE 1
11-12.2 TABLE ACCESS CLUSTER "SEG$" 1 1
12-13.1 INDEX UNIQUE SCAN "I_FILE#_BLOCK#" NON-UNIQUE 1
10-11.2 TABLE ACCESS CLUSTER "USER$" 1 1
11-12.1 INDEX UNIQUE SCAN "I_USER#" NON-UNIQUE 1
9-10.2 TABLE ACCESS CLUSTER "TS$" 1 1
10-11.1 INDEX UNIQUE SCAN "I_TS#" NON-UNIQUE 1
8-9.3 NESTED LOOPS 1 26
9-10.1 NESTED LOOPS 1 26
10-11.1 NESTED LOOPS 1 25
11-12.1 TABLE ACCESS FULL "SEG$" 1 24
11-12.2 TABLE ACCESS CLUSTER "USER$" 1 1
12-13.1 INDEX UNIQUE SCAN "I_USER#" NON-UNIQUE 1
10-11.2 TABLE ACCESS CLUSTER "TS$" 1 1
11-12.1 INDEX UNIQUE SCAN "I_TS#" NON-UNIQUE 1
9-10.2 INDEX UNIQUE SCAN "I_FILE2" UNIQUE 1
6-7.2 TABLE ACCESS CLUSTER "UET$" 606 1
7-8.1 INDEX UNIQUE SCAN "I_FILE#_BLOCK#" NON-UNIQUE 1
5-6.2 INDEX UNIQUE SCAN "I_FILE2" UNIQUE 1
4-5.2 NESTED LOOPS 1 25
QUERY_PLAN CARDINALITY COST
---------------------------------------------------------------------------------- ------------------------ ---------------
5-6.1 NESTED LOOPS 1 25
6-7.1 VIEW "SYS_DBA_SEGS" 1 1
7-8.1 UNION-ALL
8-9.1 NESTED LOOPS 1 244
9-10.1 NESTED LOOPS 1 243
10-11.1 HASH JOIN 1 242
11-12.1 NESTED LOOPS 1 25
12-13.1 NESTED LOOPS 1 24
13-14.1 TABLE ACCESS FULL "SEG$" 164 24
13-14.2 INDEX UNIQUE SCAN "I_FILE2" UNIQUE 1
12-13.2 TABLE ACCESS CLUSTER "TS$" 1 1
13-14.1 INDEX UNIQUE SCAN "I_TS#" NON-UNIQUE 1
11-12.2 VIEW "SYS_OBJECTS" 98,814 216
12-13.1 UNION-ALL
13-14.1 TABLE ACCESS FULL "TAB$" 164 24
13-14.2 TABLE ACCESS FULL "TABPART$" 16,360 24
13-14.3 TABLE ACCESS FULL "CLU$" 16,360 24
13-14.4 TABLE ACCESS FULL "IND$" 164 24
13-14.5 TABLE ACCESS FULL "INDPART$" 16,360 24
13-14.6 TABLE ACCESS FULL "LOB$" 326 24
13-14.7 TABLE ACCESS FULL "TABSUBPART$" 16,360 24
13-14.8 TABLE ACCESS FULL "INDSUBPART$" 16,360 24
13-14.9 TABLE ACCESS FULL "LOBFRAG$" 16,360 24
10-11.2 TABLE ACCESS BY INDEX ROWID "OBJ$" 2 1
11-12.1 INDEX UNIQUE SCAN "I_OBJ1" UNIQUE 1
9-10.2 TABLE ACCESS CLUSTER "USER$" 1 1
10-11.1 INDEX UNIQUE SCAN "I_USER#" NON-UNIQUE 1
8-9.2 NESTED LOOPS 1 27
9-10.1 NESTED LOOPS 1 26
10-11.1 NESTED LOOPS 1 25
11-12.1 NESTED LOOPS 1 24
12-13.1 TABLE ACCESS FULL "UNDO$" 818 24
12-13.2 INDEX UNIQUE SCAN "I_FILE2" UNIQUE 1
11-12.2 TABLE ACCESS CLUSTER "SEG$" 1 1
12-13.1 INDEX UNIQUE SCAN "I_FILE#_BLOCK#" NON-UNIQUE 1
10-11.2 TABLE ACCESS CLUSTER "USER$" 1 1
11-12.1 INDEX UNIQUE SCAN "I_USER#" NON-UNIQUE 1
9-10.2 TABLE ACCESS CLUSTER "TS$" 1 1
10-11.1 INDEX UNIQUE SCAN "I_TS#" NON-UNIQUE 1
8-9.3 NESTED LOOPS 1 26
9-10.1 NESTED LOOPS 1 26
10-11.1 NESTED LOOPS 1 25
11-12.1 TABLE ACCESS FULL "SEG$" 1 24
11-12.2 TABLE ACCESS CLUSTER "USER$" 1 1
12-13.1 INDEX UNIQUE SCAN "I_USER#" NON-UNIQUE 1
10-11.2 TABLE ACCESS CLUSTER "TS$" 1 1
11-12.1 INDEX UNIQUE SCAN "I_TS#" NON-UNIQUE 1
9-10.2 INDEX UNIQUE SCAN "I_FILE2" UNIQUE 1
6-7.2 FIXED TABLE FIXED INDEX "X$KTFBUE (ind:1)" 4 24
5-6.2 INDEX UNIQUE SCAN "I_FILE2" UNIQUE 2
Partition access information......................................
Parallel operations information.......................................
And
SQL> l
1 select owner, count(*)
2 from dba_tables
3 where owner in ('SYS', 'SYSTEM')
4 and LAST_ANALYZED IS not NULL
5* group by owner
SQL> /
no rows selected
SQL>
Thanks.
June 10, 2005 - 3:31 pm UTC
but -- what are the last-analyzed dates? are the stats even close to reality?
A reader, June 11, 2005 - 7:26 am UTC
I am afraid they are not. I will do that. I am sure that should take us a long way. Thank you.
Is it ok to dynamically change the optimizer_index_cost_adj?
Prashant, July 11, 2005 - 5:53 pm UTC
Hi Tom,
We have a DB that is currently used as a data warehouse, but we are planning to use it for an OLTP application as well. If we see any performance impact, do you think it is OK to play around with the optimizer_index_cost_adj and optimizer_index_caching values in a logon trigger for the OLTP schema?
Also, at a broad level, do you foresee any serious issues with using the same DB for both DW and OLTP? I know the question is pretty dumb; let me know if I should supply more information on this.
Thanks and Regards
Prashant
July 11, 2005 - 7:04 pm UTC
I would prefer the changes to parameters like that be localized to applications rather than the entire database. If you can ID the users to applications like that, that'd be great. The more you can default, the better.
You have the tools you need to support both types of access in the same instance(s) against a single database. You have things like RAC to provide physical separation if you like/need (these are YOUR cpus, these are MY cpus). You have the resource manager. You have materialized views to populate the DW from the OLTP without leaving the database. Since the DW probably reads lots the OLTP doesn't, you have multiple buffer pools to separate them out and give them their own 'memory' regions.
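For example, if the OLTP users connect as an identifiable schema, a logon trigger could localize the settings to just those sessions (a sketch -- OLTP_USER is a placeholder schema name, and the values are only a starting point to test, not a prescription):

create or replace trigger oltp_user_logon
  after logon on oltp_user.schema
begin
  -- applies only to sessions connecting as OLTP_USER
  execute immediate 'alter session set optimizer_index_caching = 90';
  execute immediate 'alter session set optimizer_index_cost_adj = 25';
end;
/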
RBO vs CBO
VA, August 05, 2005 - 5:50 pm UTC
A query on 8.1.7.4 using the RBO has the following tkprof:
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.07 1.30 256 910 0 14
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.07 1.30 256 910 0 14
Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: 19
The exact same query with same bind variables, on a clone database but using 9.2.0.6 and CBO has
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.01 0.00 0 0 0 0
Fetch 2 0.12 2.20 317 842 0 14
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.13 2.21 317 842 0 14
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 19
Rows Row Source Operation
------- ---------------------------------------------------
14 SORT ORDER BY (cr=842 r=317 w=0 time=2206735 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net more data to client 10 0.00 0.00
db file sequential read 317 0.02 2.13
SQL*Net message from client 2 2.04 2.07
Why is the 8i/RBO twice as fast as 9i/CBO? Why are there so many waits for db file sequential read, and what is "SQL*Net message from client"?
Both these databases are on the same SAN, same server, same instance configuration.
Thanks
August 05, 2005 - 6:31 pm UTC
it isn't, your disk waits are enormous on the "clone". 2.13 seconds. 317 IO's
what were the waits on the 8i system?
(cpu time of 0.07 and 0.13 are too close to each other to even say they are "different")
VA, August 05, 2005 - 7:31 pm UTC
waits on the 8i system? Don't know, the 10046 level 12 trace doesn't seem to be putting wait information in the trace file in 8i?
Both the systems are on the same SAN volume, same server, so why would the waits be more on 9i/CBO?
Also, the waits below show 2.13 + 2.07 which is more than the total query execution time of 2.21 seconds. So obviously, the waits are not cumulative? What *is* the sqlnet message from client wait anyway?
db file sequential read 317 0.02 2.13
SQL*Net message from client 2 2.04 2.07
I see that "disk=317" and number of waits for disk seq read is also 317. Does this mean every physical IO request had to incur a wait? Why would this be?
Thanks
August 06, 2005 - 9:24 am UTC
it sure does put them into the trace file in 8i -- TKPROF does not print them, you have to look yourself.
sqlnet message from client can/should be ignored in this case. That is the amount of time you spent waiting before running the sql (user time) in this case.
Every PIO will be a wait virtually by definition -- until you get the quantum disks that know the data you want to read before you do and have already read it for you.
Physical IO typically involves machines moving things -- a measurable amount of wait time.
If we assume the difference in response time between 8i elapse and cpu is IO wait (safe assumption), then
ops$tkyte@ORA9IR2> select (2.20-0.12)/317, (1.30-0.07)/256 from dual;
(2.20-0.12)/317 (1.30-0.07)/256
--------------- ---------------
.006561514 .004804688
it could be that your disks on your 9i system are responding on average "50% slower" -- but hundreds of IO's is too small to confirm that. Might do some beefy full scans on each system or use lower level OS tools to measure disk performance.
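For instance, a crude comparison of raw multiblock read speed would be a timed full scan of the same large table on each instance (a sketch; BIG_TABLE is a placeholder for any sizable table present on both systems):

set timing on
select /*+ full(t) */ count(*) from big_table t;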
VA, August 06, 2005 - 9:52 am UTC
But like I said twice,
"Both the systems are on the same SAN volume, same server, so why would the waits be more on 9i/CBO?"
Both the databases are on the exact same SAN volume (500GB volume comprising dozens of 16GB disks in a RAID3 configuration)
So why would 9i be getting the waits but not 8i? They are on the same hardware, same server
Thanks
August 06, 2005 - 10:27 am UTC
8i IS GETTING WAITS -- like I said:
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.07 1.30 256 910 0 14
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.07 1.30 256 910 0 14
^^^^ ----- ^^^^
what do you think the difference between those two are?
Now, if this simple test reproduces dozens of times and the 9i IO wait time is consistently 50% more than the 8i IO response time -- you'll need to test lots more, this query is just too small and too fast to make any conclusions from -- you'll want to look at many things
o so what, they are on the same SAN. so???? I'm using luns-1, you are using luns-2. Luns-1 is small, not setup right, different. Luns-2 is great, perfect.
o there is cabling and there are interfaces between the SAN and the computer; they would be different, wouldn't they?
o you are using two computers (assumption), are they the same OS level, same patches (eg: does the OS show any differences IO wise if you do big DD's from disk to disk)
There are lots to look at. Unless you are running 8i and 9i against the same database files (doubt you are) - there are many differences.
VA, August 06, 2005 - 11:42 am UTC
OK granted 8i is also getting waits, but they are not as much as the 9i waits
Yes, I have run these tests many many times and whenever 9i has to do PIO its waits are longer than 8i.
The 2 databases are on the same server, same SAN, SAME LUN. It is a 500GB SAN LUN/volume mounted on /san, 8i database files are in /san/oradata/8i/... and 9i files are in /san/oradata/9i/..., thats all the difference there is. So everything else (cabling, etc) is the SAME between both the instances.
It is really puzzling. Here, let me layout the entire testing scenario for you.
This is for a Siebel 7.7 installation on a Sun Fire 880 server with 8GB of RAM and the aforementioned 500GB SAN volume. Currently, we are running 8.1.7.4 with init.ora optimizer_mode=RULE. Performance is great.
I copied all the db files over from that one directory to another. Ran the 8i to 9i upgrade using "dbua". Gathered stats using dbms_stats.gather_schema_stats('siebel',cascade=>true)
Now QA is using LoadRunner, an automated testing tool, to basically click various tabs in the Siebel front-end and measure response time from the click to the time the screen is rendered. There are 2 rows of tabs on the UI; the automated tool clicks all of them in sequence.
In 8i/RBO, all the response times are great, less than a second (0.1 to 0.9 second) for any tab.
When in 9i/CBO mode, Siebel app sets
optimizer_index_cost_adj=1
optimizer_index_caching=100
optimizer_mode=choose
optimizer_max_permutations=100
In addition, they do
alter session set optimizer_goal=first_rows_10 for all the sessions.
So, one would think execution plans should be almost identical to RBO plans, right?
In fact, that's what I do see. All the execution plans I see are identical to the RBO plans (heavy use of nested loops, very few hash joins, etc).
But the automated tests in 9i/CBO, show inconsistent results. I looked at the SQLs behind the screens and ran tkprofs and saw high parse times which is understandable since CBO has to parse stuff the first time and that takes time. RBO parse times are tiny because it is dumb.
So I increased the shared_pool_size to 500MB and asked QA to ignore the first couple of "runs" because the caches were getting warmed up.
So they ran the entire test suite 5 times and took the average of runs 3 thru 5. It was still consistently 1-2 seconds slower than the corresponding RBO test.
Another thing I thought of was that by the time they run thru one "set" of tests and come back to the first test in the second "set", the buffer cache is used up i.e. it has to do PIO for all the data all over again.
But my 9i buffer cache (db_cache_size) is 1500M while the 8i buffer cache is only 800M. So, if anything, 8i should be the one showing poor performance.
It is almost as if the 9i software's buffer cache management is somehow poorer than 8i leading to 9i doing more frequent PIO.
And as you can see above, 9i's PIO is also somehow "slower" than 8i's PIO!
Any ideas? Thanks
August 06, 2005 - 1:10 pm UTC
...
OK granted 8i is also getting waits, but they are not as much as the 9i waits
...
for two reasons:
a) it had to do less physical IO for whatever reason, run queries again and neither should do any IO, it is *really really quite small*
b) for whatever reason, the IO waits were 50% more in one than the other in average duration. Could be many reasons as stated.
how many times have you run this very very small test. run the query say 500 times.
....
So, one would think execution plans should be almost identical to RBO plans,
right?
.....
umm, no, i would not expect any such thing, not in general.
back up, test IO, try to isolate the issue, you seem pretty sure it is "io".
..
It is almost as if the 9i software's buffer cache management is somehow poorer
than 8i leading to 9i doing more frequent PIO.
....
Umm, no.
Alberto Dell'Era, August 06, 2005 - 1:34 pm UTC
Sorry to intrude, but ... why don't you simply use the usual trick of processing the 8i trace file with 9i tkprof, and correct for the different time units in 8i and 9i ?
That way, you'll see whether it's an IO issue or not since you'll see the wait events, and stop guessing ;)
VA, August 06, 2005 - 5:40 pm UTC
Yes, I have run these tests many many times and whenever 9i has to do PIO its waits are longer than 8i.
Many many times means many many times. Yes, each individual query is very fast, but each screen has 5-6 queries and it adds up.
In all cases (after the shared pool was warmed up and CBO parse time was 0), I always see 9i/CBO doing more PIO and taking more time.
Alberto, I did try running the 9i tkprof against the 8i trace file and got ridiculous timing numbers; how do I adjust them?
Tom, How can I test IO using OS tools and see why 9i is taking more time?
Tom, with those init.ora parameters and alter sessions, why wouldn't you expect RBO-type (first_rows) plans in general? [That's what I do see!] That's what Siebel really wants: fast response time to get the first set of rows.
Any other ideas?
Thanks
August 06, 2005 - 7:23 pm UTC
RBO is not CBO, I would not expect the plans to be "the same" at all. I would have quite the opposite expectation, that they would be different.
Do large-table full table scans. Are they significantly different (when you measure TOTAL_WAIT/NUMBER_OF_WAITS)?
You might consider a tar with support who will gather your relevant OS, hardware and other details to see if you are missing an OS patch/whatnot if you show the pio to always take longer.
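For the TOTAL_WAIT/NUMBER_OF_WAITS comparison above, something along these lines run on both instances gives the average single-block and multiblock read times (a sketch; in these releases TIME_WAITED is in hundredths of a second):

select event, total_waits, time_waited,
       round(time_waited / total_waits, 3) avg_wait_cs
  from v$system_event
 where event in ('db file sequential read', 'db file scattered read')
   and total_waits > 0;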
Alberto Dell'Era, August 06, 2005 - 6:29 pm UTC
> Alberto, I did try running 9i tkprof against 8i trace file and got ridiculus
> timing numbers, how do I adjust them?
8i traces measured time in centiseconds (10^-2 secs)
9i+ traces measure time in microseconds (10^-6 secs)
So when 9i Tkprof sees 123 in the 8i trace, it thinks 123 * 10^-6 secs, but it is really 123 * 10^-2 secs.
So, just scale by 10^-2 / 10^-6 = 10,000
VA, August 06, 2005 - 8:20 pm UTC
"RBO is not CBO, I would not expect the plans to be "the same" at all. I would have quite the opposite expectation, that they would be different"
CBO with
optimizer_index_cost_adj=1
optimizer_index_caching=90
alter session set optimizer_goal=first_rows_10
generates plans that are, 99% of the time, identical to RBO plans. I don't know why you don't agree; show me a CBO plan that is different from an RBO plan given those 3 settings above.
OK I will run detailed hardware/IO traces to see why 9i is getting more IO waits.
Overall, in spite of everything being the same (hardware, OS, same LUN) and giving 9i/CBO more resources (larger shared pool, more buffer cache, more sort area size), it is still consistently 1-2 seconds slower than 8i/RBO. Also, this is after the caches have been warmed up, so initial hard parse times (which are in most cases astronomical for the CBO) are factored out.
August 07, 2005 - 8:48 am UTC
because the cbo can see bitmap indexes, parallel query, index desc scans, partitions, function based indexes, <etc etc etc etc> and the RBO has like 13 things to choose from, not all things.
I would have no expectations they would be the same. That is all I'm saying.
A CBO plan different from an RBO one
Alberto Dell'Era, August 07, 2005 - 5:28 am UTC
> show me a CBO plan that is different from a RBO plan given
> those 3 settings above.
A common join between two common (and identical) tables with a PK, same data in both but one will have 10000 rows, the other 10000+1:
alter session set optimizer_index_cost_adj=1;
alter session set optimizer_index_caching=90;
alter session set optimizer_mode=first_rows_10;
alter session set optimizer_max_permutations=100;
create table t (x int constraint t_pk primary key, y char(300));
insert /*+ append */ into t (x,y)
select rownum, rownum from all_objects where rownum <= 10000;
commit;
create table u (x int constraint u_pk primary key, y char(300));
insert /*+ append */ into u (x,y)
select rownum, rownum from all_objects where rownum <= 10000+1;
commit;
exec dbms_stats.gather_table_stats (user, 't', cascade=>true);
exec dbms_stats.gather_table_stats (user, 'u', cascade=>true);
select table_name, num_rows from user_tables where table_name in ('T','U');
TABLE_NAME NUM_ROWS
------------------------------ ----------
T 10000
U 10001
======================================================================
select /*+ va_new_jersey */ * from t,u where t.x = u.x and t.x <= 100
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 |
| 1 | NESTED LOOPS | | 10 | 6080 | 2 |
| 2 | TABLE ACCESS BY INDEX ROWID| U | 99 | 30096 | 1 |
|* 3 | INDEX RANGE SCAN | U_PK | 100 | | 2 |
| 4 | TABLE ACCESS BY INDEX ROWID| T | 1 | 304 | 1 |
|* 5 | INDEX UNIQUE SCAN | T_PK | 1 | | |
----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("X"<=100)
5 - access("X"="X")
filter("X"<=100)
Note: cpu costing is off
======================================================================
select /*+ rule va_new_jersey */ * from t,u where t.x = u.x and t.x <= 100
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | NESTED LOOPS | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| T | | | |
|* 3 | INDEX RANGE SCAN | T_PK | | | |
| 4 | TABLE ACCESS BY INDEX ROWID| U | | | |
|* 5 | INDEX UNIQUE SCAN | U_PK | | | |
----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("X"<=100)
5 - access("X"="X")
Note: rule based optimization
======================================================================
The join order is reversed ;)
Notice that if we now make "u" have 10000-1 rows:
delete from u where x >= 10000;
commit;
exec dbms_stats.gather_table_stats (user, 't', cascade=>true);
exec dbms_stats.gather_table_stats (user, 'u', cascade=>true);
select table_name, num_rows from user_tables where table_name in ('T','U');
TABLE_NAME NUM_ROWS
------------------------------ ----------
T 10000
U 9999
======================================================================
select /*+ va_new_jersey */ * from t,u where t.x = u.x and t.x <= 100
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | Re
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 |
| 1 | NESTED LOOPS | | 10 | 6080 | 2 |
| 2 | TABLE ACCESS BY INDEX ROWID| T | 99 | 30096 | 1 |
|* 3 | INDEX RANGE SCAN | T_PK | 100 | | 2 |
| 4 | TABLE ACCESS BY INDEX ROWID| U | 1 | 304 | 1 |
|* 5 | INDEX UNIQUE SCAN | U_PK | 1 | | |
----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("X"<=100)
5 - access("X"="X")
filter("X"<=100)
Note: cpu costing is off
======================================================================
select /*+ rule va_new_jersey */ * from t,u where t.x = u.x and t.x <= 100
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | Re
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | NESTED LOOPS | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| T | | | |
|* 3 | INDEX RANGE SCAN | T_PK | | | |
| 4 | TABLE ACCESS BY INDEX ROWID| U | | | |
|* 5 | INDEX UNIQUE SCAN | U_PK | | | |
----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("X"<=100)
5 - access("X"="X")
Note: rule based optimization
======================================================================
Back to the rule plan.
A minimal modification of the data in one of the tables makes the CBO change its plan and diverge from the RBO one.
So, maybe today the plan is the same, but tomorrow, when the data evolves, the plan may evolve accordingly (and stay optimal, provided the optimizer_* settings match your environment and requirements).
Note: tests in 9.2.0.6, plans fetched from v$sql_plan.
Alberto
VA, August 07, 2005 - 9:27 am UTC
Alberto, that's a fascinating example. Why is the CBO plan different from the RBO plan in your example? I noticed you didn't even have histograms, so why does that one extra row in one of the tables reverse the join order?
[Also, your example is slightly contrived; no one is going to analyze the table again if just one row is added. Heck, even the MONITORING option will not mark the table as "stale" in this case. Besides, yes, the join order is reversed, but the plans are "essentially" the same. They would show the same performance characteristics, the same LIOs, the same resource usage. Your example is brilliant, but I still stand by my assertion that given those 3 settings above and no FBIs, bitmap indexes, etc. in the picture, CBO plans will be "functionally" identical to RBO plans.]
Tom, yes, I realize that the CBO can see many more "constructs" like bitmap indexes, FBIs, etc., but Siebel uses Oracle as a bit bucket - just heap tables and b*tree indexes, nothing more advanced. That is why I said that they really want the RBO and stated their system requirements (init.ora, alter sessions, etc.) so as to get RBO plans.
See, from a marketing perspective it sounds lame to say that Siebel 7.7 doesn't support the CBO even in Oracle 9iR2! In other words, they claim to support the CBO only in 9iR2, but if you look at their settings, they result in RBO plans almost 99.9% of the time!
Thanks
August 07, 2005 - 9:48 am UTC
You asked for contrived. The CBO and RBO are very different - you yourself said "99%". So you should have no belief that the settings you have make the CBO behave like the RBO.
And the one-row difference isn't what was relevant; what matters is that there was a difference at all - 1 row or 10,000 rows would not matter.
Here is another contrived example (they will all be contrived by definition)
ops$tkyte@ORA9IR2> create table t1 as select 1 id1, rownum id2 from all_objects;
Table created.
ops$tkyte@ORA9IR2> create index t1_idx1 on t1(id1);
Index created.
ops$tkyte@ORA9IR2> create index t1_idx2 on t1(id2);
Index created.
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T1', cascade=>true );
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table t2 as select rownum id1, rownum id2 from all_objects;
Table created.
ops$tkyte@ORA9IR2> create index t2_idx1 on t2(id1);
Index created.
ops$tkyte@ORA9IR2> create index t2_idx2 on t2(id2);
Index created.
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T2', cascade=>true );
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter session set optimizer_index_cost_adj=1;
Session altered.
ops$tkyte@ORA9IR2> alter session set optimizer_index_caching=90;
Session altered.
ops$tkyte@ORA9IR2> alter session set optimizer_mode=first_rows_10;
Session altered.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @plan "select * from t1, t2 where t1.id1 = t2.id1 and t1.id2 = t2.id2 and t2.id1 = 1"
ops$tkyte@ORA9IR2> delete from plan_table;
7 rows deleted.
ops$tkyte@ORA9IR2> explain plan for &1;
old 1: explain plan for &1
new 1: explain plan for select * from t1, t2 where t1.id1 = t2.id1 and t1.id2 = t2.id2 and t2.id1 = 1
Explained.
ops$tkyte@ORA9IR2> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 16 | 2 |
|* 1 | TABLE ACCESS BY INDEX ROWID | T1 | 1 | 7 | 1 |
| 2 | NESTED LOOPS | | 1 | 16 | 2 |
| 3 | TABLE ACCESS BY INDEX ROWID| T2 | 1 | 9 | 1 |
|* 4 | INDEX RANGE SCAN | T2_IDX1 | 1 | | 1 |
|* 5 | INDEX RANGE SCAN | T1_IDX1 | 30652 | | |
-----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("T1"."ID2"="T2"."ID2")
4 - access("T2"."ID1"=1)
5 - access("T1"."ID1"=1)
Note: cpu costing is off
20 rows selected.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @plan "select /*+ RULE */ * from t1, t2 where t1.id1 = t2.id1 and t1.id2 = t2.id2 and t2.id1 = 1"
ops$tkyte@ORA9IR2> delete from plan_table;
6 rows deleted.
ops$tkyte@ORA9IR2> explain plan for &1;
old 1: explain plan for &1
new 1: explain plan for select /*+ RULE */ * from t1, t2 where t1.id1 = t2.id1 and t1.id2 = t2.id2 and t2.id1 = 1
Explained.
ops$tkyte@ORA9IR2> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | NESTED LOOPS | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| T2 | | | |
|* 3 | INDEX RANGE SCAN | T2_IDX1 | | | |
| 4 | AND-EQUAL | | | | |
|* 5 | INDEX RANGE SCAN | T1_IDX1 | | | |
|* 6 | INDEX RANGE SCAN | T1_IDX2 | | | |
----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("T2"."ID1"=1)
5 - access("T1"."ID1"="T2"."ID1")
6 - access("T1"."ID2"="T2"."ID2")
Note: rule based optimization
21 rows selected.
Alberto Dell'Era, August 07, 2005 - 11:28 am UTC
Here's another, even simpler:
alter session set optimizer_index_cost_adj=1;
alter session set optimizer_index_caching=90;
alter session set optimizer_mode=first_rows_10;
alter session set optimizer_max_permutations=100;
create table t (x int constraint t_pk primary key, y char(300));
insert /*+ append */ into t (x,y)
select rownum, rownum from dual connect by level <= 100000;
commit;
exec dbms_stats.gather_table_stats (user, 't', cascade=>true);
====================================================================
select /*+ */ x from t va_new_jersey where x <= 1000
--------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
--------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 1 |
|* 1 | INDEX RANGE SCAN | T_PK | 11 | 55 | 2 |
--------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("X"<=1000)
Note: cpu costing is off
====================================================================
select /*+ rule */ x from t va_new_jersey where x <= 1000
--------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
--------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
|* 1 | INDEX RANGE SCAN | T_X_Y_IDX | | | |
--------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("X"<=1000)
Note: rule based optimization
=====================================================================
Notice how the CBO plan is far better (it scans the slim index T_PK)
than the RBO one (that scans the fat T_X_Y_IDX).
And I've used your settings - the plan is different, but more efficient, so why care about it? Uncork a bottle of champagne instead!
Pretend you don't notice the plan being more efficient, and start changing the instance settings to force the plan to be the same as the RBO one - you'll end up paying for it by consuming more resources ...
In my experience, CBO plans are almost always a lot more efficient than the RBO ones, so I see no point in trying to imitate the good ole RBO. We switched from the RBO to the CBO a couple of years ago, and all the plans were better or equal - only one needed fixing, and that was a very strange statement (btw, in hindsight, probably by gathering system statistics even that plan would have been OK or better).
Version 9.2.0.6, plans from v$sql_plan.
VA, August 07, 2005 - 12:43 pm UTC
[Alberto, I dont see where you created the index T_X_Y_IDX]
Tom, Alberto: I see your point. I am NOT trying to force the CBO to behave like the RBO. I am just following the instructions given by Siebel to support their 7.7 software on Oracle 9i. We have been working with them for almost a year now, tweaking parameters, running traces, and in spite of giving the 9i/CBO database more resources, more memory, ignoring the higher initial hard parse times, etc., etc., the CBO is still consistently 1-2 seconds slower than the RBO!
I am running both 8i and 9i on the same server, same SAN, same LUN to eliminate as many differences as possible. As I said, I copied the 8i database to another directory on the same LUN, ran dbua to upgrade it, gathered stats, etc.
Nothing I do, nothing, can get CBO performance down to RBO levels. Almost all queries CBO is 1-2 seconds slower than RBO.
What would you do if you were me?
Thanks
August 07, 2005 - 12:46 pm UTC
Can you run the simple thing I asked? Just create a "big table", then read it.
Probably best to read it via an index - to get lots of single-block IO.
Show that 9iR2 IO is N% slower than 8i IO on the same machine. That will be something you can take to support to diagnose further.
Or you'll find they are the same, and we look for the underlying root cause.
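Something along these lines would do it (just a sketch - the table name, row count and pad width here are arbitrary; size the table so it is comfortably larger than your buffer cache, and run it cold on both instances against the same files):
create table io_bench ( id int, pad char(2000) );
insert /*+ append */ into io_bench
select rownum, 'x' from all_objects where rownum <= 50000;
commit;
create index io_bench_idx on io_bench(id);
exec dbms_stats.gather_table_stats( user, 'IO_BENCH', cascade=>true );
alter session set timed_statistics = true;
alter session set events '10046 trace name context forever, level 8';
-- drive the reads through the index to get lots of single-block
-- 'db file sequential read' IO, like the Siebel plans do
select /*+ index( t io_bench_idx ) */ count(pad) from io_bench t where id > 0;
alter session set events '10046 trace name context off';
-- then tkprof both traces and compare Total Waited / Times Waited
-- for 'db file sequential read' between the 8i run and the 9i run.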
VA, August 07, 2005 - 12:49 pm UTC
One more thing... in most cases I see that it is not the plan that is the issue (well, after the shared pool is warmed up, that is; before that, CBO parse times are orders of magnitude higher than RBO)... it is the execution of the plan that is slower in 9i/CBO. For whatever reason, the CBO does more PIO (in spite of having a larger buffer cache) and, in spite of being on the same hardware, the PIO takes longer.
Any ideas greatly appreciated. Thanks.
August 07, 2005 - 4:56 pm UTC
Is the 9i database a backup-and-restore of the 8i one, or a build from scratch (e.g., could it simply be that the data is on more blocks, or not clustered remotely the same as it was in the 8i instance)?
And when push comes to shove, you need/want/should test the IO in isolation - not Siebel, just Oracle - and get something reproducible.
Alberto Dell'Era, August 07, 2005 - 2:03 pm UTC
> [Alberto, I dont see where you created the index T_X_Y_IDX]
Oops sorry :) here's the original test:
alter session set optimizer_index_cost_adj=1;
alter session set optimizer_index_caching=90;
alter session set optimizer_mode=first_rows_10;
alter session set optimizer_max_permutations=100;
create table t (x int constraint t_pk primary key, y char(300));
insert /*+ append */ into t (x,y)
select rownum, rownum from dual connect by level <= 100000;
commit;
create index t_x_y_idx on t (x,y);
exec dbms_stats.gather_table_stats (user, 't', cascade=>true);
---------------
> probably best to read it via index - to get lots of single block IO.
Tom, maybe a simpler way is setting db_file_multiblock_read_count=1 ?
drop table big_table_test;
create table big_table_test (x)
nologging
as select rpad ('x',6000,'x') -- 6000 is > my db_block_size/ 2 = 8k / 2 = 4k
from dual connect by level < 100 -- 100 blocks
/
alter session set db_file_multiblock_read_count=1;
alter session set tracefile_identifier=big_table_test;
alter session set events '10046 trace name context forever, level 12';
begin
for x in (select /*+ full (t) */ x from big_table_test t) loop
null;
end loop;
end;
/
exit;
********************************************************************************
SELECT /*+ full (t) */ X
FROM
BIG_TABLE_TEST T
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 1 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 100 0.01 0.11 99 103 0 99
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 102 0.01 0.12 99 104 0 99
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: 59 (recursive depth: 1)
Rows Row Source Operation
------- ---------------------------------------------------
99 TABLE ACCESS FULL BIG_TABLE_TEST (cr=103 r=99 w=0 time=117474 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 99 0.06 0.11
August 07, 2005 - 4:55 pm UTC
yes, setting to 1 would probably do it, but it would be best to simulate the massive index hitting they are doing.
VA, August 07, 2005 - 6:07 pm UTC
"is the 9i database a backup and restore of the 8i or a build from scratch (eg: does it simply mean the data is on more blocks, not clustered remotely the same as the 8i instance was)"
Not sure what you are saying here. As I said earlier, all my 8i database files are on /san/oradata/8i/*.dbf (mountpoint is /san). I copied all these to /san/oradata/9i/*.dbf. Then ran dbua to upgrade it to 9iR2 and gathered stats using dbms_stats.gather_schema_stats('siebel',cascade=>true)
Not sure what you are saying about clustering, more blocks and such.
I will test the IO in isolation by doing some big index range scans (db file sequential read) and also some FTS (db file scattered reads) in both 8i and 9i and compare the results. [Since the underlying hardware is identical, the results should be almost identical]
Thanks
August 07, 2005 - 7:00 pm UTC
OK, so it was a backup-and-restore copy - everything should be the same.
Clearly (going way back to the beginning):
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.07 1.30 256 910 0 14
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.07 1.30 256 910 0 14
Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: 19
The exact same query with same bind variables, on a clone database but using
9.2.0.6 and CBO has
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.01 0.00 0 0 0 0
Fetch 2 0.12 2.20 317 842 0 14
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.13 2.21 317 842 0 14
These were then from two very different plans? Else I would expect the LIO's to be the same.
Can we see the details (the queries and the plans)
VA, August 07, 2005 - 7:33 pm UTC
Yes, now that I look at them closer, the 2 plans are different. They are a dozen pages long; forgive me for thinking they "looked" the same!
You asked for the query and the plans, here goes. You have been warned. Siebel queries are ridiculously long.
Query is
SELECT
FROM
SIEBEL.S_ACT_EMP T1,
SIEBEL.S_EVT_ACT_X T2,
SIEBEL.S_EVT_MKTG T3,
SIEBEL.S_PARTY T4,
SIEBEL.S_ORG_GROUP T5,
SIEBEL.S_FN_APPR T6,
SIEBEL.S_CONTACT T7,
SIEBEL.S_ORG_GROUP T8,
SIEBEL.S_EXP_RPT T9,
SIEBEL.S_CONTACT T10,
SIEBEL.S_PARTY T11,
SIEBEL.S_PROJ T12,
SIEBEL.S_CONTACT_FNX T13,
SIEBEL.S_PROD_DEFECT T14,
SIEBEL.S_ASSET T15,
SIEBEL.S_PROJITEM T16,
SIEBEL.S_EVT_ACT_SS T17,
SIEBEL.S_CONTACT T18,
SIEBEL.S_EVT_ACT_FNX T19,
SIEBEL.S_USER T20,
SIEBEL.S_USER T21,
SIEBEL.S_ASSET T22,
SIEBEL.S_EVT_FUL_REQ T23,
SIEBEL.S_EVT_CAL T24,
SIEBEL.S_ORG_EXT T25,
SIEBEL.S_TMSHT_LINE T26,
SIEBEL.S_OPTY T27,
SIEBEL.S_EVT_ACT T28
WHERE
T19.AMS_ACT_ID = T6.ROW_ID (+) AND
T8.PAR_GROUP_OU_ID = T5.PAR_ROW_ID (+) AND
T28.LAST_UPD_BY = T18.PAR_ROW_ID (+) AND
T28.TARGET_PER_ID = T10.PAR_ROW_ID (+) AND
T28.ASSET_ID = T22.ROW_ID (+) AND
T28.X_CORR_ID = T23.ROW_ID (+) AND
T28.PR_EXP_RPT_ID = T9.ROW_ID (+) AND
T28.OPTY_ID = T27.ROW_ID (+) AND
T28.TARGET_OU_ID = T25.PAR_ROW_ID (+) AND
T28.ORG_GROUP_ID = T8.PAR_ROW_ID (+) AND
T28.SRA_DEFECT_ID = T14.ROW_ID (+) AND
T28.PROJ_ID = T12.ROW_ID (+) AND
T28.PROJ_ITEM_ID = T16.ROW_ID (+) AND
T28.PR_TMSHT_LINE_ID = T26.ROW_ID (+) AND
T28.X_SUB_ACCT_ID = T15.ROW_ID (+) AND
T28.LAST_UPD_BY = T21.PAR_ROW_ID (+) AND
T28.ROW_ID = T3.PAR_ROW_ID (+) AND
T28.ROW_ID = T19.PAR_ROW_ID (+) AND
T28.ROW_ID = T2.PAR_ROW_ID (+) AND
T28.ROW_ID = T17.PAR_ROW_ID (+) AND
T28.ROW_ID = T24.PAR_ROW_ID (+) AND
T28.OWNER_PER_ID = T1.EMP_ID (+) AND T28.ROW_ID = T1.ACTIVITY_ID (+) AND
T1.EMP_ID = T11.ROW_ID (+) AND
T1.EMP_ID = T20.PAR_ROW_ID (+) AND
T28.TARGET_PER_ID = T4.ROW_ID (+) AND
T28.TARGET_PER_ID = T7.PAR_ROW_ID (+) AND
T28.TARGET_PER_ID = T13.PAR_ROW_ID (+) AND
((T28.APPT_REPT_REPL_CD IS NULL) AND
((T28.TEMPLATE_FLG != 'Y' AND T28.TEMPLATE_FLG != 'P' OR T28.TEMPLATE_FLG IS NULL) AND (T28.PRIV_FLG = 'N' OR T28.ROW_ID IN (
SELECT SQ1_T1.ACTIVITY_ID
FROM SIEBEL.S_ACT_EMP SQ1_T1, SIEBEL.S_PARTY SQ1_T2, SIEBEL.S_CONTACT SQ1_T3
WHERE ( SQ1_T2.ROW_ID = SQ1_T3.PAR_ROW_ID AND SQ1_T1.EMP_ID = SQ1_T2.ROW_ID)
AND (SQ1_T3.EMP_FLG = 'Y')
AND (SQ1_T2.ROW_ID = :t2_row_id))) AND T28.CREDIT_ACT_CD IS NULL ) AND
(T28.PRIV_FLG = :t28_priv_flg OR T28.PRIV_FLG IS NULL OR T28.OWNER_PER_ID = :t28_owner_per_id)) AND
(T28.ORG_GROUP_ID = :t28_org_group_id)
ORDER BY
T1.ROW_STATUS DESC, T28.TODO_PLAN_START_DT DESC
8i RBO tkprof (raw trace file with 10046 level 12 processed with 9i tkprof binary) is
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.07 1.30 256 910 0 14
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.07 1.30 256 910 0 14
Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: 19
Rows Row Source Operation
------- ---------------------------------------------------
14 SORT ORDER BY
14 FILTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 NESTED LOOPS OUTER
15 TABLE ACCESS BY INDEX ROWID S_EVT_ACT
194 INDEX RANGE SCAN (object id 140543)
0 TABLE ACCESS BY INDEX ROWID S_OPTY
14 INDEX UNIQUE SCAN (object id 114609)
0 TABLE ACCESS BY INDEX ROWID S_TMSHT_LINE
14 INDEX UNIQUE SCAN (object id 117944)
0 TABLE ACCESS BY INDEX ROWID S_ORG_EXT
14 INDEX UNIQUE SCAN (object id 115263)
0 TABLE ACCESS BY INDEX ROWID S_EVT_FUL_REQ
14 INDEX UNIQUE SCAN (object id 112722)
7 TABLE ACCESS BY INDEX ROWID S_ASSET
21 INDEX UNIQUE SCAN (object id 109781)
14 TABLE ACCESS BY INDEX ROWID S_USER
28 INDEX UNIQUE SCAN (object id 118063)
14 TABLE ACCESS BY INDEX ROWID S_CONTACT
28 INDEX UNIQUE SCAN (object id 110820)
0 TABLE ACCESS BY INDEX ROWID S_PROJITEM
14 INDEX UNIQUE SCAN (object id 116507)
0 TABLE ACCESS BY INDEX ROWID S_ASSET
14 INDEX UNIQUE SCAN (object id 109781)
0 TABLE ACCESS BY INDEX ROWID S_PROD_DEFECT
14 INDEX UNIQUE SCAN (object id 116073)
0 TABLE ACCESS BY INDEX ROWID S_PROJ
14 INDEX UNIQUE SCAN (object id 116461)
7 TABLE ACCESS BY INDEX ROWID S_CONTACT
21 INDEX UNIQUE SCAN (object id 110820)
0 TABLE ACCESS BY INDEX ROWID S_EXP_RPT
14 INDEX UNIQUE SCAN (object id 112834)
14 TABLE ACCESS BY INDEX ROWID S_ORG_GROUP
28 INDEX UNIQUE SCAN (object id 115379)
7 TABLE ACCESS BY INDEX ROWID S_CONTACT
21 INDEX UNIQUE SCAN (object id 110820)
14 TABLE ACCESS BY INDEX ROWID S_ORG_GROUP
28 INDEX UNIQUE SCAN (object id 115379)
7 INDEX UNIQUE SCAN (object id 115537)
14 TABLE ACCESS BY INDEX ROWID S_ACT_EMP
29 INDEX RANGE SCAN (object id 109299)
14 TABLE ACCESS BY INDEX ROWID S_USER
28 INDEX UNIQUE SCAN (object id 118063)
14 INDEX UNIQUE SCAN (object id 115537)
0 TABLE ACCESS BY INDEX ROWID S_EVT_CAL
14 INDEX RANGE SCAN (object id 159125)
14 TABLE ACCESS BY INDEX ROWID S_EVT_ACT_FNX
28 INDEX RANGE SCAN (object id 112704)
0 TABLE ACCESS BY INDEX ROWID S_FN_APPR
14 INDEX UNIQUE SCAN (object id 113028)
0 TABLE ACCESS BY INDEX ROWID S_EVT_ACT_SS
14 INDEX RANGE SCAN (object id 112708)
7 TABLE ACCESS BY INDEX ROWID S_CONTACT_FNX
21 INDEX RANGE SCAN (object id 110976)
0 TABLE ACCESS BY INDEX ROWID S_EVT_MKTG
14 INDEX RANGE SCAN (object id 159127)
14 TABLE ACCESS BY INDEX ROWID S_EVT_ACT_X
28 INDEX RANGE SCAN (object id 112709)
0 NESTED LOOPS
0 NESTED LOOPS
0 INDEX UNIQUE SCAN (object id 115537)
0 TABLE ACCESS BY INDEX ROWID S_CONTACT
0 INDEX UNIQUE SCAN (object id 110820)
0 TABLE ACCESS BY INDEX ROWID S_ACT_EMP
0 INDEX RANGE SCAN (object id 109299)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net more data to client 9 0.00 0.00
file open 13 0.00 0.00
db file sequential read 256 0.03 1.27
SQL*Net message from client 2 2.15 2.18
9i CBO tkprof is
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.01 0.00 0 0 0 0
Fetch 2 0.12 2.20 317 842 0 14
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.13 2.21 317 842 0 14
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 19
Rows Row Source Operation
------- ---------------------------------------------------
14 SORT ORDER BY (cr=842 r=317 w=0 time=2206735 us)
14 FILTER (cr=842 r=317 w=0 time=2204644 us)
14 NESTED LOOPS OUTER (cr=842 r=317 w=0 time=2204542 us)
14 MERGE JOIN OUTER (cr=812 r=317 w=0 time=2203943 us)
14 NESTED LOOPS OUTER (cr=809 r=317 w=0 time=2203274 us)
14 NESTED LOOPS OUTER (cr=780 r=317 w=0 time=2202741 us)
14 NESTED LOOPS OUTER (cr=751 r=316 w=0 time=2196242 us)
14 NESTED LOOPS OUTER (cr=719 r=316 w=0 time=2195316 us)
14 NESTED LOOPS OUTER (cr=675 r=291 w=0 time=1962630 us)
14 NESTED LOOPS OUTER (cr=659 r=291 w=0 time=1962186 us)
14 NESTED LOOPS OUTER (cr=615 r=291 w=0 time=1961266 us)
14 NESTED LOOPS OUTER (cr=570 r=269 w=0 time=1774362 us)
14 NESTED LOOPS OUTER (cr=570 r=269 w=0 time=1774155 us)
14 NESTED LOOPS OUTER (cr=538 r=251 w=0 time=1644804 us)
14 NESTED LOOPS OUTER (cr=522 r=251 w=0 time=1644405 us)
14 NESTED LOOPS OUTER (cr=493 r=250 w=0 time=1635631 us)
14 NESTED LOOPS OUTER (cr=464 r=250 w=0 time=1635003 us)
14 NESTED LOOPS OUTER (cr=464 r=250 w=0 time=1634832 us)
14 NESTED LOOPS OUTER (cr=425 r=250 w=0 time=1633973 us)
14 NESTED LOOPS OUTER (cr=425 r=250 w=0 time=1633812 us)
14 NESTED LOOPS OUTER (cr=409 r=250 w=0 time=1633230 us)
14 NESTED LOOPS OUTER (cr=393 r=250 w=0 time=1632574 us)
14 NESTED LOOPS OUTER (cr=393 r=250 w=0 time=1632403 us)
14 NESTED LOOPS OUTER (cr=393 r=250 w=0 time=1632241 us)
14 NESTED LOOPS OUTER (cr=391 r=250 w=0 time=1631704 us)
14 NESTED LOOPS OUTER (cr=391 r=250 w=0 time=1631549 us)
14 NESTED LOOPS OUTER (cr=391 r=250 w=0 time=1631407 us)
14 NESTED LOOPS OUTER (cr=391 r=250 w=0 time=1631246 us)
14 NESTED LOOPS OUTER (cr=391 r=250 w=0 time=1631066 us)
14 TABLE ACCESS BY INDEX ROWID OBJ#(11822) (cr=391 r=250 w=0 time=1630801 us)
193 INDEX RANGE SCAN OBJ#(140543) (cr=3 r=1 w=0 time=13064 us)(object id 140543)
0 TABLE ACCESS BY INDEX ROWID OBJ#(14224) (cr=0 r=0 w=0 time=52 us)
0 INDEX UNIQUE SCAN OBJ#(114609) (cr=0 r=0 w=0 time=11 us)(object id 114609)
0 TABLE ACCESS BY INDEX ROWID OBJ#(18219) (cr=0 r=0 w=0 time=36 us)
0 INDEX UNIQUE SCAN OBJ#(117944) (cr=0 r=0 w=0 time=4 us)(object id 117944)
0 TABLE ACCESS BY INDEX ROWID OBJ#(16471) (cr=0 r=0 w=0 time=26 us)
0 INDEX UNIQUE SCAN OBJ#(116507) (cr=0 r=0 w=0 time=6 us)(object id 116507)
0 TABLE ACCESS BY INDEX ROWID OBJ#(15960) (cr=0 r=0 w=0 time=24 us)
0 INDEX UNIQUE SCAN OBJ#(116073) (cr=0 r=0 w=0 time=4 us)(object id 116073)
0 TABLE ACCESS BY INDEX ROWID OBJ#(12036) (cr=0 r=0 w=0 time=30 us)
0 INDEX UNIQUE SCAN OBJ#(112834) (cr=0 r=0 w=0 time=8 us)(object id 112834)
0 TABLE ACCESS BY INDEX ROWID OBJ#(11891) (cr=2 r=0 w=0 time=221 us)
0 INDEX RANGE SCAN OBJ#(112708) (cr=2 r=0 w=0 time=197 us)(object id 112708)
0 TABLE ACCESS BY INDEX ROWID OBJ#(14980) (cr=0 r=0 w=0 time=26 us)
0 INDEX UNIQUE SCAN OBJ#(115263) (cr=0 r=0 w=0 time=4 us)(object id 115263)
0 TABLE ACCESS BY INDEX ROWID OBJ#(16407) (cr=0 r=0 w=0 time=27 us)
0 INDEX UNIQUE SCAN OBJ#(116461) (cr=0 r=0 w=0 time=6 us)(object id 116461)
0 TABLE ACCESS BY INDEX ROWID OBJ#(150888) (cr=16 r=0 w=0 time=417 us)
0 INDEX RANGE SCAN OBJ#(159125) (cr=16 r=0 w=0 time=391 us)(object id 159125)
0 TABLE ACCESS BY INDEX ROWID OBJ#(150889) (cr=16 r=0 w=0 time=286 us)
0 INDEX RANGE SCAN OBJ#(159127) (cr=16 r=0 w=0 time=257 us)(object id 159127)
0 TABLE ACCESS BY INDEX ROWID OBJ#(11902) (cr=0 r=0 w=0 time=28 us)
0 INDEX UNIQUE SCAN OBJ#(112722) (cr=0 r=0 w=0 time=4 us)(object id 112722)
14 TABLE ACCESS BY INDEX ROWID OBJ#(18380) (cr=39 r=0 w=0 time=691 us)
14 INDEX UNIQUE SCAN OBJ#(118063) (cr=16 r=0 w=0 time=269 us)(object id 118063)
0 TABLE ACCESS BY INDEX ROWID OBJ#(8325) (cr=0 r=0 w=0 time=30 us)
0 INDEX UNIQUE SCAN OBJ#(109781) (cr=0 r=0 w=0 time=4 us)(object id 109781)
7 TABLE ACCESS BY INDEX ROWID OBJ#(8325) (cr=29 r=0 w=0 time=467 us)
7 INDEX UNIQUE SCAN OBJ#(109781) (cr=8 r=0 w=0 time=155 us)(object id 109781)
7 TABLE ACCESS BY INDEX ROWID OBJ#(9764) (cr=29 r=1 w=0 time=8589 us)
7 INDEX RANGE SCAN OBJ#(110976) (cr=16 r=0 w=0 time=368 us)(object id 110976)
7 INDEX UNIQUE SCAN OBJ#(115537) (cr=16 r=0 w=0 time=263 us)(object id 115537)
14 TABLE ACCESS BY INDEX ROWID OBJ#(11888) (cr=32 r=18 w=0 time=129132 us)
14 INDEX RANGE SCAN OBJ#(112704) (cr=18 r=7 w=0 time=46747 us)(object id 112704)
0 TABLE ACCESS BY INDEX ROWID OBJ#(12292) (cr=0 r=0 w=0 time=48 us)
0 INDEX UNIQUE SCAN OBJ#(113028) (cr=0 r=0 w=0 time=9 us)(object id 113028)
14 TABLE ACCESS BY INDEX ROWID OBJ#(7731) (cr=45 r=22 w=0 time=186631 us)
14 INDEX RANGE SCAN OBJ#(109302) (cr=30 r=9 w=0 time=76389 us)(object id 109302)
14 TABLE ACCESS BY INDEX ROWID OBJ#(18380) (cr=44 r=0 w=0 time=722 us)
14 INDEX UNIQUE SCAN OBJ#(118063) (cr=16 r=0 w=0 time=268 us)(object id 118063)
14 INDEX UNIQUE SCAN OBJ#(115537) (cr=16 r=0 w=0 time=296 us)(object id 115537)
14 TABLE ACCESS BY INDEX ROWID OBJ#(11895) (cr=44 r=25 w=0 time=232439 us)
14 INDEX RANGE SCAN OBJ#(112709) (cr=30 r=11 w=0 time=93266 us)(object id 112709)
14 TABLE ACCESS BY INDEX ROWID OBJ#(9603) (cr=32 r=0 w=0 time=697 us)
14 INDEX UNIQUE SCAN OBJ#(110820) (cr=16 r=0 w=0 time=377 us)(object id 110820)
7 TABLE ACCESS BY INDEX ROWID OBJ#(9603) (cr=29 r=1 w=0 time=6336 us)
7 INDEX UNIQUE SCAN OBJ#(110820) (cr=16 r=0 w=0 time=201 us)(object id 110820)
7 TABLE ACCESS BY INDEX ROWID OBJ#(9603) (cr=29 r=0 w=0 time=343 us)
7 INDEX UNIQUE SCAN OBJ#(110820) (cr=16 r=0 w=0 time=139 us)(object id 110820)
14 BUFFER SORT (cr=3 r=0 w=0 time=329 us)
1 TABLE ACCESS BY INDEX ROWID OBJ#(15114) (cr=3 r=0 w=0 time=56 us)
1 INDEX UNIQUE SCAN OBJ#(115379) (cr=2 r=0 w=0 time=32 us)(object id 115379)
14 TABLE ACCESS BY INDEX ROWID OBJ#(15114) (cr=30 r=0 w=0 time=404 us)
14 INDEX UNIQUE SCAN OBJ#(115379) (cr=16 r=0 w=0 time=204 us)(object id 115379)
0 NESTED LOOPS (cr=0 r=0 w=0 time=0 us)
0 NESTED LOOPS (cr=0 r=0 w=0 time=0 us)
0 TABLE ACCESS BY INDEX ROWID OBJ#(9603) (cr=0 r=0 w=0 time=0 us)
0 INDEX UNIQUE SCAN OBJ#(110820) (cr=0 r=0 w=0 time=0 us)(object id 110820)
0 INDEX UNIQUE SCAN OBJ#(115537) (cr=0 r=0 w=0 time=0 us)(object id 115537)
0 INDEX RANGE SCAN OBJ#(109302) (cr=0 r=0 w=0 time=0 us)(object id 109302)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net more data to client 10 0.00 0.00
db file sequential read 317 0.02 2.13
SQL*Net message from client 2 2.04 2.07
Thanks
August 08, 2005 - 7:32 am UTC
I'd do your benchmark and see what it says.
IO
A reader, August 07, 2005 - 9:11 pm UTC
I did some db file sequential read IO tests on 8i and 9i, and my initial suspicion seems to be right: 9i takes longer for the same amount of work.
8i tkprof
********************************************************************************
select /*+ index(a S_WF_PROP_VAL_F1) */ * from S_WF_PROP_VAL a
where rownum<=200000
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 200001 28.79 33.45 20549 414159 0 200000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 200003 28.79 33.45 20549 414159 0 200000
Misses in library cache during parse: 1
Optimizer goal: RULE
Parsing user id: 19
Rows Row Source Operation
------- ---------------------------------------------------
200000 COUNT STOPKEY
200000 TABLE ACCESS BY INDEX ROWID S_WF_PROP_VAL
200000 INDEX FULL SCAN (object id 118222)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 200004 0.01 0.26
SQL*Net message from client 200004 0.10 304.33
SQL*Net more data to client 14401 0.01 0.75
db file sequential read 20549 0.03 8.11
********************************************************************************
9i tkprof
********************************************************************************
select /*+ index(a S_WF_PROP_VAL_F1) */ * from S_WF_PROP_VAL a
where rownum<=200000
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 200001 29.29 59.42 12679 414160 0 200000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 200003 29.29 59.43 12679 414160 0 200000
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 19
Rows Row Source Operation
------- ---------------------------------------------------
200000 COUNT STOPKEY (cr=400127 r=12679 w=0 time=42837284 us)
200000 TABLE ACCESS BY INDEX ROWID OBJ#(18589) (cr=400127 r=12679 w=0 time=42318191 us)
200000 INDEX FULL SCAN OBJ#(118222) (cr=200003 r=136 w=0 time=3607545 us)(object id 118222)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 200002 0.00 0.26
SQL*Net message from client 200002 0.21 275.64
SQL*Net more data to client 15301 0.00 0.90
db file sequential read 12679 0.03 33.63
********************************************************************************
See the waits for db file sequential read. Even though 9i has roughly half as many waits (12K vs 8i's 20K), its total wait time (33.63 sec) is almost 4 times the 8i total wait time (8.11 sec)!!
Any ideas?
[Also, note that these are both generated using the 9i tkprof. I didn't have to adjust the time units for the 8i trace file as Alberto mentioned - wonder why?]
Thanks
August 08, 2005 - 7:33 am UTC
Contact support - you have the example they'll need right now. Don't even mention Siebel, just use this.
Alberto Dell'Era, August 08, 2005 - 6:23 am UTC
>[Also, note that these are both generated using the 9i tkprof, I didnt have to
>adjust the time units for the 8i trace file as Alberto mentioned, wonder why?]
Oh! It seems that in later releases, 9i tkprof automatically detects the db version of the trace file and adjusts its output accordingly.
I ran a quick test - ran 9.2.0.6 tkprof against a 9.2.0.6 trace file, then edited the trace by changing every occurrence of "9.2." to "8.1." and "9i" to "8i", and ran 9.2.0.6 tkprof against the edited (fake 8i) file. Results:
Original:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 99 0.06 0.11
Edited:
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 99 600.60 1143.33
So it automatically adjusts by 10,000.
(It is also a nice trick to get finer timing resolution when needed ... ;)
TKProf from above
A reader, August 08, 2005 - 9:38 am UTC
After looking at the results above, there are a couple of oddities.
1) For both 8i and 9i, the TKProf report is showing 200,000+ SQL*Net messages. Since it is returning 200,000 records, does that mean 1 record is being fetched at a time? (see the note below)
2) Has contention been eliminated as a cause? Perhaps a statspack snapshot before and after the test, on both instances, would give a better overall picture.
3) Is the TKProf result repeatable? Instead of fetching 200,000 rows just once, execute and fetch from the statement 10 times. That could help determine whether this is a real problem or an anomaly.
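Regarding point 1: in SQL*Plus the rows-per-fetch is controlled by ARRAYSIZE (default 15), so 200,000+ fetches for 200,000 rows does look like single-row fetches - something SQL*Plus falls back to when, for example, a LONG column is in the select list. A quick check (just a sketch):
SQL> show arraysize
arraysize 15
SQL> set arraysize 100   -- up to 100 rows per fetch/round trip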
OS file optimization ?
Gary, August 08, 2005 - 7:12 pm UTC
I may be out of my depth here, but might this be something as simple as file caching or similar by the OS?
I'm wondering whether, since the poster is comparing a Siebel app running against 9i and 8i, the 9i is a test environment and the 8i db is production and in use by lots of people. If so, then the data files being accessed by the 8i db may be getting optimized ahead of those accessed by the 9i db.
August 08, 2005 - 8:42 pm UTC
he says
- same machine
- same mount points
- load driven by loadrunner
those were my first thoughts as well.
To Gary
VA, August 08, 2005 - 9:32 pm UTC
> Since its returning 200,000 records, does that mean 1
> record is being fetched at a time?
Maybe. I am setting termout off and/or set autotrace traceonly. But the records are pretty wide, so I wouldn't be surprised if it is getting 1 per fetch.
> statspack
statspack shows nothing out of the ordinary.
> execute it 10 times
Well, if I execute it again, all the PIOs (and with them the db file sequential read waits) go away, so it is much faster.
> Gary:...
You are right. The 8i is production, the 9i is test. Production is used by many people. But QA starts the load runner tests on both instances after business hours.
What do you mean by "...the data files being accessed by the 8i db may be getting optimized ahead of those accessed by the 9i db..."?
Any optimizations being done by the OS would benefit both instances equally, would they not? Also, the 9i buffer cache is twice as large as the 8i one, so if anything, 9i should do less PIO.
I have opened a TAR with Support and am working on it with them.
Thanks
August 09, 2005 - 9:19 am UTC
...
Any optimizations being done by the OS would benefit both instance equally
.....
No, they would not - caches cache frequently accessed data. If you are in fact using the 8i instance all day, its data is "more frequent".
Do you have a test system that you can use in isolation, away from the noise of the day?
Are there LONG columns involved in this? There must be - SQL*Plus drops to a row at a time during this kind of processing.
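A quick way to check for them (a sketch - assuming the Siebel objects live in a schema called SIEBEL and you can see the DBA views; otherwise use ALL_TAB_COLUMNS):
select table_name, column_name
  from dba_tab_columns
 where owner = 'SIEBEL'
   and data_type in ('LONG', 'LONG RAW');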
Also, if the 8i instance is in fact "live" and modified all day long -- you cannot any longer compare PIO between the two. These databases have diverged, they are NOT the same anymore.
Alberto Dell'Era, August 09, 2005 - 6:31 am UTC
> Any optimizations being done by the OS would benefit both instance equally,
would they not?
Not necessarily.
The activity performed during business hours on the 8i instance may have cached most of the blocks in the filesystem OS cache, and that would give an unfair advantage to the 8i benchmark.
NB The filesystem OS cache is for sure kicking in in these benchmarks - look at the avg waits:
8i: 8.11 secs / 20549 = .39 msec
9i: 33.63 secs / 12679 = 2.65 msec
too small to be all PIO-from-the-disk-surface in general (unless you run solid-state disks of course).
> Also, the 9i buffer cache is twice as large as the 8i one so if
anything, 9i should do lesser PIO.
Not necessarily, for the same reason. All those index-happy plans may have cached most of the indexes in the buffer cache due to business-hours activity on the 8i instance, again giving 8i an unfair advantage.
August 09, 2005 - 10:14 am UTC
yes, I thought we were looking at apples to apples here, but we are not.
Index Fast Full Scan not being used
Narendra, August 20, 2008 - 10:32 am UTC
Tom,
I am expecting the CBO to use an Index Fast Full Scan but am getting a full table scan instead. Can you please help me understand this behaviour?
SQL> select * from v$version ;
BANNER
----------------------------------------------------------------
Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
PL/SQL Release 8.1.7.4.0 - Production
CORE 8.1.7.0.0 Production
TNS for Solaris: Version 8.1.7.4.0 - Production
NLSRTL Version 3.4.1.0.0 - Production
SQL> create table big_table as select * from all_objects ;
Table created.
SQL> desc big_table ;
SQL> desc big_table ;
Name Null? Type
------------------------------------------------- -------- -----------------
OWNER NOT NULL VARCHAR2(30)
OBJECT_NAME NOT NULL VARCHAR2(30)
SUBOBJECT_NAME VARCHAR2(30)
OBJECT_ID NOT NULL NUMBER
DATA_OBJECT_ID NUMBER
OBJECT_TYPE VARCHAR2(18)
CREATED NOT NULL DATE
LAST_DDL_TIME NOT NULL DATE
TIMESTAMP VARCHAR2(19)
STATUS VARCHAR2(7)
TEMPORARY VARCHAR2(1)
GENERATED VARCHAR2(1)
SECONDARY VARCHAR2(1)
SQL> create index objectid_idx on big_table(object_id) ;
Index created.
SQL> exec dbms_stats.gather_table_stats(user, 'BIG_TABLE', cascade=>true) ;
PL/SQL procedure successfully completed.
SQL> set autotrace on explain
SQL> select count(*) from big_table ;
COUNT(*)
----------
110844
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=224 Card=1)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'BIG_TABLE' (Cost=224 Card=110844)
SQL> spool off
p.s. I know I am talking about "obsolete", "desupported" software here, but it would be great if you could throw some light on this.
August 20, 2008 - 10:51 am UTC
something is rotten....
card=110844, that is not the result of a create table as select from all objects in 8i - it would be more like 20,000-40,000
ops$tkyte@ORA817DEV> create table big_table as select * from all_objects ;
Table created.
ops$tkyte@ORA817DEV> desc big_table ;
Name Null? Type
---------------------------------------- -------- ----------------------------
OWNER NOT NULL VARCHAR2(30)
OBJECT_NAME NOT NULL VARCHAR2(30)
SUBOBJECT_NAME VARCHAR2(30)
OBJECT_ID NOT NULL NUMBER
DATA_OBJECT_ID NUMBER
OBJECT_TYPE VARCHAR2(18)
CREATED NOT NULL DATE
LAST_DDL_TIME NOT NULL DATE
TIMESTAMP VARCHAR2(19)
STATUS VARCHAR2(7)
TEMPORARY VARCHAR2(1)
GENERATED VARCHAR2(1)
SECONDARY VARCHAR2(1)
ops$tkyte@ORA817DEV> create index objectid_idx on big_table(object_id) ;
Index created.
ops$tkyte@ORA817DEV> exec dbms_stats.gather_table_stats(user, 'BIG_TABLE', cascade=>true) ;
PL/SQL procedure successfully completed.
ops$tkyte@ORA817DEV> set autotrace on explain
ops$tkyte@ORA817DEV> select count(*) from big_table ;
COUNT(*)
----------
25741
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1)
   1    0   SORT (AGGREGATE)
   2    1     INDEX (FAST FULL SCAN) OF 'OBJECTID_IDX' (NON-UNIQUE) (Cost=2 Card=25741)
so, what did you really do here?
it was NOT a cut and paste
SQL> desc big_table ;
SQL> desc big_table ;
Addl. Info - Index Fast Full Scan not being used
Narendra, August 20, 2008 - 10:40 am UTC
Tom,
Some additional details on follow-up above that might help.
SQL> select /*+ INDEX_FFS(big_table objectid_idx) */ count(*) from big_table ;
COUNT(*)
----------
110844
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=38 Card=1)
1 0 SORT (AGGREGATE)
2 1 INDEX (FAST FULL SCAN) OF 'OBJECTID_IDX' (NON-UNIQUE) (Cost=38 Card=110844)
When I explicitly hinted the SQL, the CBO used the fast full scan. In fact, the CBO shows the fast full scan cost as 38 and the full table scan cost as 224 (as mentioned in the earlier follow-up). Isn't that a good enough reason for the CBO to choose the index fast full scan?
August 20, 2008 - 10:52 am UTC
you need to do what I did, a pure cut and paste - unless you tell me your all objects really has 110,844 rows.
and explain the double 'desc' without output both times....
Index Fast Full Scan
Narendra, August 21, 2008 - 5:14 am UTC
Tom,
The only editing that I had done earlier was to adjust the output of the DESC command. The reason for having so many records in ALL_OBJECTS is probably that I am using a DEV environment that contains objects for Oracle Applications as well.
Anyway, here is the complete cut-and-paste (without editing):
SQL> select * from v$version ;
BANNER
----------------------------------------------------------------
Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
PL/SQL Release 8.1.7.4.0 - Production
CORE 8.1.7.0.0 Production
TNS for Solaris: Version 8.1.7.4.0 - Production
NLSRTL Version 3.4.1.0.0 - Production
SQL> drop table big_table ;
Table dropped.
SQL> select count(*) from all_objects ;
COUNT(*)
----------
110843
SQL> create table big_table as select * from all_objects ;
Table created.
SQL> desc big_table ;
Name Null? Type
---------------------------------------------- -------- ------------------
OWNER NOT NULL VARCHAR2(30)
OBJECT_NAME NOT NULL VARCHAR2(30)
SUBOBJECT_NAME VARCHAR2(30)
OBJECT_ID NOT NULL NUMBER
DATA_OBJECT_ID NUMBER
OBJECT_TYPE VARCHAR2(18)
CREATED NOT NULL DATE
LAST_DDL_TIME NOT NULL DATE
TIMESTAMP VARCHAR2(19)
STATUS VARCHAR2(7)
TEMPORARY VARCHAR2(1)
GENERATED VARCHAR2(1)
SECONDARY VARCHAR2(1)
SQL> create index objectid_idx on big_table(object_id) ;
Index created.
SQL> exec dbms_stats.gather_table_stats(user, 'BIG_TABLE', cascade=>true) ;
PL/SQL procedure successfully completed.
SQL> set autotrace on explain
SQL> select count(*) from big_table ;
COUNT(*)
----------
110844
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=224 Card=1)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'BIG_TABLE' (Cost=224 Card=110844)
SQL> select /*+ INDEX_FFS(big_table objectid_idx) */ count(*) from big_table ;
COUNT(*)
----------
110844
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=38 Card=1)
1 0 SORT (AGGREGATE)
2 1 INDEX (FAST FULL SCAN) OF 'OBJECTID_IDX' (NON-UNIQUE) (Cost=38 Card=110844)
SQL> spool off
Can you please help me in understanding this ?
August 21, 2008 - 8:37 am UTC
What non-default parameters do you have in place?
Index Fast Full Scan
Narendra, August 21, 2008 - 9:47 am UTC
Tom,
I don't think there are any non-default parameters set.
Can you please let me know which ones you are interested in?
August 21, 2008 - 9:23 pm UTC
I'm interested in all of the NON DEFAULT ones you have set - meaning, all of them that you are setting (if you set them, they must not be defaulting).
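A quick way to pull just those, rather than eyeballing the whole SHOW PARAMETER output (explicitly set "_" parameters show up here as well):
select name, value
  from v$parameter
 where isdefault = 'FALSE'
 order by name;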
Index Fast Full Scan
Narendra, August 22, 2008 - 5:11 am UTC
Tom,
As I am not managing this environment, I am not sure which parameters are set to non-default values.
However, here is an output of SHOW PARAMETER.
SQL> show parameter
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
O7_DICTIONARY_ACCESSIBILITY boolean TRUE
_complex_view_merging boolean TRUE
_fast_full_scan_enabled boolean FALSE
_like_with_bind_as_equality boolean TRUE
_optimizer_mode_force boolean TRUE
_optimizer_undo_changes boolean FALSE
_or_expand_nvl_predicate boolean TRUE
_ordered_nested_loop boolean TRUE
_push_join_predicate boolean TRUE
_push_join_union_view boolean TRUE
_sort_elimination_cost_ratio integer 5
_sqlexec_progression_cost integer 0
_table_scan_cost_plus_one boolean TRUE
_trace_files_public boolean TRUE
_use_column_stats_for_function boolean TRUE
active_instance_count integer
always_anti_join string NESTED_LOOPS
always_semi_join string STANDARD
aq_tm_processes integer 2
audit_file_dest string ?/rdbms/audit
audit_trail string NONE
background_core_dump string partial
background_dump_dest string /ora/bt31/BASE/oracle/bt31db/8
.1.6/admin/bdump
backup_tape_io_slaves boolean FALSE
bitmap_merge_area_size integer 1048576
blank_trimming boolean FALSE
buffer_pool_keep string
buffer_pool_recycle string
commit_point_strength integer 1
compatible string 8.1.7
control_file_record_keep_time integer 7
control_files string /ora/bt31/BASE/oracle/bt31data
/cntrl01.dbf, /ora/bt31/BASE/o
racle/bt31data/cntrl02.dbf, /o
ra/bt31/BASE/oracle/bt31data/c
ntrl03.dbf
core_dump_dest string /ora/bt31/BASE/oracle/bt31db/8
.1.6/admin/cdump
cpu_count integer 14
create_bitmap_area_size integer 8388608
cursor_sharing string EXACT
cursor_space_for_time boolean FALSE
db_block_buffers integer 5000
db_block_checking boolean FALSE
db_block_checksum boolean FALSE
db_block_lru_latches integer 7
db_block_max_dirty_target integer 5000
db_block_size integer 8192
db_domain string
db_file_direct_io_count integer 64
db_file_multiblock_read_count integer 8
db_file_name_convert string
db_files integer 500
db_name string BT31
db_writer_processes integer 1
dblink_encrypt_login boolean FALSE
dbwr_io_slaves integer 0
disk_asynch_io boolean TRUE
distributed_transactions integer 31
dml_locks integer 500
enqueue_resources integer 5000
event string
fast_start_io_target integer 5000
fast_start_parallel_rollback string LOW
fixed_date string
gc_defer_time integer 10
gc_files_to_locks string
gc_releasable_locks integer 0
gc_rollback_locks string 0-1024=32!8REACH
global_names boolean FALSE
hash_area_size integer 512000
hash_join_enabled boolean TRUE
hash_multiblock_io_count integer 0
hi_shared_memory_address integer 0
hpux_sched_noage integer
hs_autoregister boolean TRUE
ifile file ?/dbs/ifilecbo.ora
instance_groups string
instance_name string BT31
instance_number integer 0
java_max_sessionspace_size integer 0
java_pool_size string 20000K
java_soft_sessionspace_limit integer 0
job_queue_interval integer 60
job_queue_processes integer 0
large_pool_size string 0
license_max_sessions integer 0
license_max_users integer 0
license_sessions_warning integer 0
lm_locks integer 12000
lm_ress integer 6000
local_listener string
lock_name_space string
lock_sga boolean FALSE
log_archive_dest string
log_archive_dest_1 string
log_archive_dest_2 string
log_archive_dest_3 string
log_archive_dest_4 string
log_archive_dest_5 string
log_archive_dest_state_1 string enable
log_archive_dest_state_2 string enable
log_archive_dest_state_3 string enable
log_archive_dest_state_4 string enable
log_archive_dest_state_5 string enable
log_archive_duplex_dest string
log_archive_format string %t_%s.dbf
log_archive_max_processes integer 1
log_archive_min_succeed_dest integer 1
log_archive_start boolean FALSE
log_archive_trace integer 0
log_buffer integer 1048576
log_checkpoint_interval integer 100000
log_checkpoint_timeout integer 72000
log_checkpoints_to_alert boolean FALSE
log_file_name_convert string
max_commit_propagation_delay integer 700
max_dump_file_size string 10240
max_enabled_roles integer 40
max_rollback_segments integer 30
mts_circuits integer 0
mts_dispatchers string
mts_listener_address string
mts_max_dispatchers integer 5
mts_max_servers integer 20
mts_multiple_listeners boolean FALSE
mts_servers integer 0
mts_service string BT31
mts_sessions integer 0
nls_calendar string
nls_comp string
nls_currency string
nls_date_format string DD-MON-RR
nls_date_language string
nls_dual_currency string
nls_iso_currency string
nls_language string american
nls_numeric_characters string .,
nls_sort string binary
nls_territory string america
nls_time_format string
nls_time_tz_format string
nls_timestamp_format string
nls_timestamp_tz_format string
object_cache_max_size_percent integer 10
object_cache_optimal_size integer 102400
open_cursors integer 600
open_links integer 4
open_links_per_instance integer 4
ops_interconnects string
optimizer_features_enable string 8.1.7
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_max_permutations integer 79000
optimizer_mode string CHOOSE
optimizer_percent_parallel integer 0
oracle_trace_collection_name string
oracle_trace_collection_path string ?/otrace/admin/cdf
oracle_trace_collection_size integer 5242880
oracle_trace_enable boolean FALSE
oracle_trace_facility_name string oracled
oracle_trace_facility_path string ?/otrace/admin/fdf
os_authent_prefix string ops$
os_roles boolean FALSE
parallel_adaptive_multi_user boolean FALSE
parallel_automatic_tuning boolean FALSE
parallel_broadcast_enabled boolean FALSE
parallel_execution_message_size integer 2148
parallel_instance_group string
parallel_max_servers integer 8
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_threads_per_cpu integer 2
partition_view_enabled boolean FALSE
plsql_v2_compatibility boolean FALSE
pre_page_sga boolean FALSE
processes integer 100
query_rewrite_enabled boolean TRUE
query_rewrite_integrity string enforced
rdbms_server_dn string
read_only_open_delayed boolean FALSE
recovery_parallelism integer 0
remote_dependencies_mode string TIMESTAMP
remote_login_passwordfile string EXCLUSIVE
remote_os_authent boolean FALSE
remote_os_roles boolean FALSE
replication_dependency_tracking boolean TRUE
resource_limit boolean FALSE
resource_manager_plan string
rollback_segments string rbs01, rbs02, rbs03, rbs04, rb
s05, rbs06, rbs07
row_locking string always
serial_reuse string DISABLE
serializable boolean FALSE
service_names string BT31
session_cached_cursors integer 0
session_max_open_files integer 10
sessions integer 115
shadow_core_dump string partial
shared_memory_address integer 0
shared_pool_reserved_size string 30000000
shared_pool_size string 300000000
sort_area_retained_size integer 0
sort_area_size integer 256000
sort_multiblock_read_count integer 2
sql92_security boolean FALSE
sql_trace boolean FALSE
sql_version string NATIVE
standby_archive_dest string ?/dbs/arch
star_transformation_enabled string FALSE
tape_asynch_io boolean TRUE
text_enable boolean FALSE
thread integer 0
timed_os_statistics integer 0
timed_statistics boolean TRUE
tracefile_identifier string
transaction_auditing boolean TRUE
transactions integer 126
transactions_per_rollback_segment integer 5
use_indirect_data_buffers boolean FALSE
user_dump_dest string /ora/bt31/BASE/oracle/bt31db/8
.1.6/admin/udump
utl_file_dir string *
SQL> spool off
Hope this helps.
August 22, 2008 - 9:51 am UTC
ok, that you have _ parameters in there means you've set lots of undocumented parameters.
You are getting what you asked for - the unanswerable - all of those _ parameters do things you and I are not sure about (we don't know, they are undocumented)
you have a ton of optimizer parameters set, unset them and you'll get "normal" behavior.
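(For what it is worth, the _fast_full_scan_enabled = FALSE entry above looks like the obvious suspect for the missing INDEX_FFS plan. As a quick test - assuming that parameter is session-settable in your release, and not as a recommendation to keep fiddling with undocumented parameters:
alter session set "_fast_full_scan_enabled" = true;
set autotrace on explain
select count(*) from big_table;
and see whether the fast full scan comes back without the hint.)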
do you understand that by having utl_file_dir set to *, anyone can destroy your database and oracle install - anyone..... ouch that hurts.
You need to have a meeting immediately, with the init.ora printed out - and have someone justify each and every setting, this is not good.
Index Fast Full Scan
Narendra, August 22, 2008 - 11:02 am UTC
Tom,
Thanks for your advice.
I was not quite sure whether SHOW PARAMETER always displays only modified _ parameters or all _ parameters.
If the strange behaviour is due to those undocumented parameters, then that explains all.
About UTL_FILE_DIR value, no comments as the environment is managed by DBAs and am not quite interested in getting involved in company politics.
But isn't the default value of _fast_full_scan_enabled FALSE in 8i anyway?
I could not find any reference on how to find the default values of these undocumented parameters, just to confirm your claim that they must have been explicitly set because they appear in the SHOW PARAMETER output. Or should I consider it a FACT because you say so?
:)
August 26, 2008 - 7:22 pm UTC
only modified ones...
You don't need to get involved in politics; just send them a polite email with the body "hey guys, what happens if I issue utl_file.fopen( '/home/oracle/oradata/yourside/system.dbf', 'w' )? Will that be OK? I think it will erase that file; utl_file_dir should not be *".
And if they ignore you, forward to their manager later.
I'm not going to comment on what the optimizer is doing here at all; you have SO MANY parameters set that it makes my head spin. It is not reasonable to even try to diagnose this - you get what you ask for.
You should consider that in pretty much every other output you see, all over the world, "_" parameters do not show up.
Unless they are set.
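For completeness, the usual way to confirm what the undocumented parameters are set to (and whether they are still at their defaults) is to query the x$ views as SYS. This is only a sketch; the x$ structures are undocumented and can change between versions:

select i.ksppinm   param_name,
       v.ksppstvl  param_value,
       v.ksppstdf  is_default
  from x$ksppi  i,
       x$ksppcv v
 where i.indx = v.indx
   and i.ksppinm like '\_%' escape '\'
 order by i.ksppinm;

The is_default column answers the "how do I confirm it" question directly, without relying on what SHOW PARAMETER chooses to display.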
Security risk
Govind, August 22, 2008 - 8:05 pm UTC
Tom,
I like your comment about utl_file_dir value of * and risks associated with it, gave me a new perspective.
However, should it not be treated as a security hole in Oracle? Why does Oracle allow it?
Regards, Govind
orafind.blogspot.com - Oracle power
August 26, 2008 - 7:51 pm UTC
in 9i, it is deprecated (utl_file_dir, the entire parameter is)
We allow you to "grant drop any table" too - doesn't mean you should grant it to everyone.
Use sense....
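For anyone looking for the supported alternative, the replacement is a directory object, which exposes only the directories you explicitly choose. A minimal sketch (the object name, path and grantee below are made up for illustration):

create directory report_dir as '/u01/app/reports';
grant read, write on directory report_dir to app_user;

declare
    l_file utl_file.file_type;
begin
    -- fopen takes the directory object name, not a raw OS path
    l_file := utl_file.fopen( 'REPORT_DIR', 'report.txt', 'w' );
    utl_file.put_line( l_file, 'some output' );
    utl_file.fclose( l_file );
end;
/

That way the database can write files only where you have granted access, instead of anywhere the Oracle software owner can reach.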
Greg Norris, August 26, 2008 - 11:38 am UTC
[QUOTE]
I like your comment about utl_file_dir value of * and risks associated with it, gave me a new perspective.
However, should it not be treated as a security hole in Oracle? Why does Oracle allow it?
[/QUOTE]
I'd call this a problem with the DBA (or whoever was playing that role), rather than with Oracle. The platform would be far less useful if Oracle simply disallowed any risky configuration, since it has no way of knowing the underlying reasons, or the internal controls an organization has put in place to mitigate the risks.
link is down
Ananya, June 02, 2010 - 1:05 pm UTC
June 08, 2010 - 9:58 am UTC
You'd have to ask the owners of evdbt.com that question, wouldn't you?
problem for CBO, handling many and optional parameters in a single query
abc, February 25, 2011 - 5:42 am UTC
Version Oracle Database 10g Release 2.
Please see the query below; there is no need to study it in depth. What I want to point out is the number of optional parameters in the WHERE clause: a blank (' ') in the WHERE clause means the user has left that parameter blank.
This is a query generated dynamically in the background by our Oracle Utilities application (CC&B, which Oracle recently bought from SPL) when the user enters some parameters in the interface. There are many parameters on the interface and many of them are optional, and the programmer of the application tried to cater for every possible input scenario in one single query. The problem is that it is very difficult for the CBO to generate a good plan for some input scenarios: if we tune this query for one set of input parameters (by creating matching indexes, setting spfile parameters, etc.), execution for another set of input parameters gets slow. There are many scenarios, and we have failed to tune it in such a way that the CBO generates an efficient plan for all of them.
My questions are:-
1- Isn't it bad development practice to try to cater for all possible input scenarios in one query, especially when there are many optional parameters?
2- If the answer to no. 1 is NO, then the question arises: is there any feature in 10g or 11g that can be used to solve this problem?
3- Is there anything else you can suggest to solve this problem?
SELECT
MTR.MTR_TYPE_CD,
MTR.BADGE_NBR,
MTR.MTR_ID,
MTL.DESCR,
MTR.SERIAL_NBR,
MFL.DESCR,
MDL.DESCR,
MLH.LOC_DTTM
FROM
CI_MTR_LOC_HIS MLH,
CI_MTR MTR,
CI_MFG_L MFL,
CI_MODEL_L MDL,
CI_MTR_TYPE_L MTL
WHERE
(MTR.MTR_TYPE_CD > ' '
OR
(MTR.MTR_TYPE_CD = ' '
AND MTR.BADGE_NBR > ' ')
OR
(MTR.MTR_TYPE_CD = ' '
AND MTR.BADGE_NBR = ' '
AND MTR.MTR_ID > ' '))
AND MLH.MTR_ID = MTR.MTR_ID
AND MLH.STK_LOC_CD = 'DKG-N'
AND MLH.LOC_DTTM =
(SELECT MAX(MLH2.LOC_DTTM)
FROM CI_MTR_LOC_HIS MLH2
WHERE MLH2.STK_LOC_CD = 'DKG-N'
AND MLH2.MTR_ID = MLH.MTR_ID
AND MLH2.LOC_DTTM <=
to_date('2007-04-16-00.00.00','YYYY-MM-DD-HH24.MI.SS'))
AND
('G1.6' = ' '
OR
('G1.6' <> ' '
AND MTR.MTR_TYPE_CD = 'G1.6'))
AND MTR.BADGE_NBR BETWEEN '
' AND 'ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ'
AND MTR.MFG_CD = MFL.MFG_CD
AND MFL.LANGUAGE_CD = 'ENG'
AND MTR.MFG_CD = MDL.MFG_CD
AND MTR.MODEL_CD = MDL.MODEL_CD
AND MDL.LANGUAGE_CD = 'ENG'
AND MTL.MTR_TYPE_CD = MTR.MTR_TYPE_CD
AND MTL.LANGUAGE_CD = 'ENG'
ORDER BY MTR.MTR_TYPE_CD
, MTR.BADGE_NBR
, MTR.MTR_ID
Thanks and Regards,
February 25, 2011 - 8:49 am UTC
1) it depends, but in general, yes I would agree with you.
In this case - they have done a lot of bad (horribly bad) things.
a) no binds.
b) implicit conversions left and right (it appears that way; it is hard to tell what is a number, a date, or a string in this mess)
3) dynamic sampling at level 3 or 4 in the session that executes that query may well help.
http://www.oracle.com/technetwork/issue-archive/2009/09-jan/o19asktom-086775.html
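A minimal sketch of what that would look like (level 4 picked as an example; either the session setting or, if the generated SQL could carry a hint, the statement-level form):

-- session level: affects every statement parsed in this session
alter session set optimizer_dynamic_sampling = 4;

-- statement level, if the application allowed a hint to be injected
select /*+ dynamic_sampling(4) */ mtr.mtr_id, mtr.badge_nbr
  from ci_mtr mtr
 where mtr.mtr_type_cd = 'G1.6';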
review
A reader, February 25, 2011 - 5:25 pm UTC
Thank you for your useful reply.
1- You said "it depends"; can you please elaborate?
3- The solution you offered: did you mean it is applicable to the same query as above, or do we still have to break that query into smaller ones for the different input scenarios and then apply dynamic sampling?
February 28, 2011 - 7:58 am UTC
1) in technology - there are only two answers to all questions.
a) "why" - to try and get to the bottom of why someone is trying to do something in a certain specific way. When someone asks me "how do I create a table in plsql", I always ask "why". The reason: they probably are converting something from sql server and in sql server you create temporary tables on the fly. you DO NOT do that in Oracle - we are different (they are different). I ask "why do you want to create a table in plsql" and then tell them the right approach.
b) "it depends" - everything depends. It is highly likely that what you said in #1 is correct - but there could always be cases whereby it is not necessarily true. It all depends on the context, the state of the environment, what you are truly trying to do. It depends.
3) I'm not following you at all here. The option of changing the query doesn't seem to exist, does it? It is a 3rd party application. The dynamic sampling would be an attempt to help the existing non-bind-variable queries "optimize better".
it is using bind variables
abc, March 18, 2011 - 10:55 am UTC
The query is actually using bind variables everywhere; in the one I pasted here I have replaced the bind variables with literals so that I could execute it in SQL*Plus.
So the query is slow only because the programmer tried to cater for all possible input scenarios in one query.
Yes, it is a 3rd party application and redesigning the query is not an option for us, but the vendor does have that option, so I was thinking of suggesting to the vendor something like breaking the query into smaller parts.
March 18, 2011 - 11:03 am UTC
So the query is slow only because the programmer tried to cater for all possible input scenarios in one query.
You cannot make that claim, for we don't know how the programmer bound the inputs to that query. Are there implicit conversions? We have no idea, because you didn't really tell us what they are doing.
point them here for a useful technique:
http://www.oracle.com/technetwork/issue-archive/2009/09-jul/o49asktom-090487.html
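The general shape of that technique, sketched here with made-up names rather than quoted from the article, is to build the WHERE clause dynamically, adding a predicate only for the inputs the user actually supplied, while still binding everything:

create or replace function get_meters(
    p_mtr_type_cd in varchar2 default null,
    p_badge_nbr   in varchar2 default null )
return sys_refcursor
as
    l_query  varchar2(4000) :=
        'select mtr_id, mtr_type_cd, badge_nbr from ci_mtr where 1 = 1';
    l_cursor sys_refcursor;
begin
    -- each bind appears exactly once whether or not the caller supplied it,
    -- so a single OPEN ... USING works for every combination of inputs
    if p_mtr_type_cd is not null then
        l_query := l_query || ' and mtr_type_cd = :p_mtr_type_cd';
    else
        l_query := l_query || ' and (1 = 1 or :p_mtr_type_cd is null)';
    end if;

    if p_badge_nbr is not null then
        l_query := l_query || ' and badge_nbr = :p_badge_nbr';
    else
        l_query := l_query || ' and (1 = 1 or :p_badge_nbr is null)';
    end if;

    open l_cursor for l_query using p_mtr_type_cd, p_badge_nbr;
    return l_cursor;
end;
/

Each distinct combination of supplied inputs gets its own SQL text and therefore its own plan, while bind variables keep the number of distinct statements small and the shared pool healthy.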
Values for transactional/OLTP still valid in 11R2
Dieter Oberkofler, April 27, 2012 - 5:14 am UTC
I was wondering if your recommendations for transactional/OLTP systems are still valid for 11gR2 and, if so, why Oracle would not change its templates when installing a new instance?
o The settings of OPTIMIZER_INDEX_CACHING = 90 and OPTIMIZER_INDEX_COST_ADJ = 25. These are
typically appropriate for many transactional/OLTP systems.
April 27, 2012 - 8:18 am UTC
The recommendation is to NOT TOUCH any of the optimizer parameters; let them all default.
Set memory sizes and not too much more (maybe processes, open cursors, control files, undo settings and recovery options, but that is about it).
ops$tkyte%ORA11GR2> show parameter optimizer_inde
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
in 9 and up, just let them default.
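If you want to see which optimizer parameters have been explicitly set on an existing instance, one quick check (just a sketch) is:

select name, value, isdefault, ismodified
  from v$parameter
 where name like 'optimizer%'
 order by name;

Anything with ISDEFAULT = FALSE has been set by somebody and is a candidate for being unset.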
Jess, August 21, 2012 - 10:14 am UTC
Hi Tom,
I have to say, this is the first time I'm seeing the "for <do the work> loop null" logic, and I'm intrigued. How would that work when the 'for' provides the dates for partition pruning in the query? That is, for example,
FOR REC in (SELECT TO_CHAR (pdate, 'yyyy-mm-dd') purchase_date FROM date_lookup_tbl)
LOOP
INSERT INTO some_table (col1, col2, col3, col4.....)
SELECT a.col1, a.col2, b.col3, a.col4.....
FROM a, b --each is massive with daily partitions on purch_date
WHERE a.id = b.id
AND A.PURCH_DATE BETWEEN TO_DATE (rec.purchase_date || ' 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
AND TO_DATE (rec.purchase_date || ' 23:59:59', 'yyyy-mm-dd hh24:mi:ss')
AND B.PURCH_TRADE BETWEEN TO_DATE (rec.purchase_date || ' 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
AND TO_DATE (rec.purchase_date || ' 23:59:59', 'yyyy-mm-dd hh24:mi:ss')
AND A.SHIPPED_DATE BETWEEN TO_DATE ('20120101 00:00:00', 'yyyymmdd hh24:mi:ss')
AND TO_DATE ('20120131 23:59:59', 'yyyymmdd hh24:mi:ss');
COMMIT;
END LOOP;
I am struggling to see how to re-write it so that all the logic is done outside of the loop.
Thanks as always.
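For what it is worth, here is a minimal sketch (using the table names from the example above, not code from this thread) of how the same work could be expressed as one set-based insert driven by the lookup table; whether partition pruning still takes place has to be verified in the actual execution plan:

insert /*+ append */ into some_table (col1, col2, col3, col4)
select a.col1, a.col2, b.col3, a.col4
  from date_lookup_tbl d, a, b
 where a.id           = b.id
   -- half-open day ranges replace the BETWEEN ... 23:59:59 pattern
   and a.purch_date   >= trunc(d.pdate)
   and a.purch_date   <  trunc(d.pdate) + 1
   and b.purch_trade  >= trunc(d.pdate)
   and b.purch_trade  <  trunc(d.pdate) + 1
   and a.shipped_date >= to_date('20120101', 'yyyymmdd')
   and a.shipped_date <  to_date('20120201', 'yyyymmdd');
commit;

With the join in place, pruning on A and B would show up in the plan as KEY (join or subquery) pruning rather than a fixed single partition per execution, so it is worth comparing plans and runtimes against the loop before suggesting the rewrite.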