Tom, when one is confronted with a 3-page explain plan and "make it faster" - where does one start? What are your favorite books for this? Do you like Christian Antognini's book? Are there books of your own you would recommend?
I have not personally read Christian Antognini's book - but I know him and would recommend it based on that. He knows what he is talking about.
I start with the method I outlined in your previous question regarding the bad plan selection.
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:4043225300346897444 I look for estimated cardinality values that are orders of magnitude away from the actual cardinality. If I have a bad plan - it is going to be caused by that. So, I look for that and then, if I find it, I ask "why is the optimizer having a hard time estimating the correct cardinality?"
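One way to see the estimated and actual cardinalities side by side (in 10g and up) is dbms_xplan.display_cursor with row-source statistics enabled - a sketch, assuming a hypothetical table T:

select /*+ gather_plan_statistics */ count(*)
  from t
 where trunc(date_col) = to_date( '01/01/2011', 'dd/mm/yyyy' );

select *
  from table( dbms_xplan.display_cursor( null, null, 'ALLSTATS LAST' ) );

In the resulting plan, compare the E-Rows (estimated) column against A-Rows (actual) at each step - a difference of orders of magnitude is what you are hunting for.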
It could be as easy as something like:
where trunc(date_col) = to_date( '01/01/2011', 'dd/mm/yyyy' );
Here, the optimizer doesn't know anything about trunc(date_col). It knows a lot about date_col - but nothing about f(date_col).
So, how would I fix that?
I could collect statistics on trunc(date_col) in 11g (extended statistics)
I could use dynamic sampling (9i)
I could use profiles (10g)
I could rewrite the query as:
where date_col >= to_date( '01/01/2011', 'dd/mm/yyyy' )
and date_col < to_date( '01/01/2011', 'dd/mm/yyyy' )+1
(that is my preference actually!)
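The 11g extended-statistics option above could be sketched like this - assuming a table T owned by the current user (the table and column names are illustrative; dbms_stats.create_extended_stats is the documented API):

select dbms_stats.create_extended_stats( user, 'T', '(trunc(date_col))' )
  from dual;

begin
   dbms_stats.gather_table_stats
   ( user, 'T',
     method_opt => 'for all columns size auto for columns (trunc(date_col))' );
end;
/

After that, the optimizer has statistics on the expression trunc(date_col) itself and can estimate the cardinality of "where trunc(date_col) = ..." sensibly.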
So, you would look for the reason why the estimated cardinality is off - which leads to a bad plan - which we need to fix.
Now, what if the estimated cardinalities are dead on and you still think "this query should go faster". Then you are in the realm of "rewrite the query". Forget about the existing plan, it is not relevant - you are going for a brand new plan - from a brand new query.
In many cases, it would be best to ignore the existing query altogether. Try to write down the "specification" - what does this query really need to do. Start fresh, don't burden yourself with the legacy of the existing query (which might not even be correct!).
You also need a complete understanding of the schema - all constraints, all rules. For example - this is a query I was asked to tune once:
ops$tkyte%ORA11GR2> SELECT COUNT(*)
2 FROM T1, T2, T3
3 WHERE T2.order_id = T1.order_id
4 AND T2.service_order_id = T3.service_order_id (+)
5 AND T3.related_service_order_id = TO_NUMBER(:v0);
Now, just looking at that query - the only thing you could "tune" would be to remove the (+), as it is not necessary. Since they have "t3.related_service_order_id = to_number(:v0)", we KNOW that we don't need to outer join to T3 - if we did preserve T2 rows with no match in T3, t3.related_service_order_id would be NULL in those rows - but NULL is not equal to anything, so any row produced by the outer join would be rejected anyway.
But, that wouldn't tune anything - since the optimizer is smart enough to recognize that, even if the developer isn't. So, it wasn't hurting anything.
But, could you reduce that query - the query I reduced it to was:
ops$tkyte%ORA11GR2> SELECT COUNT(*)
2 FROM T3
3 WHERE T3.related_service_order_id = TO_NUMBER(:v0);
How did I do that? I did that by figuring out that
nothing is selected from T1 in the output. There is a primary key on T1(order_id). T2(order_id) happens to be not null and is a foreign key to T1. Therefore in a join of T1 to T2 by that key - all rows in T2 will appear, T1 is not used to reduce or multiply T2 - therefore, T1 is not needed.
Then I realized that nothing is selected from T2 in the output. T2(service_order_id) is the primary key - which is joined to T3(service_order_id), which happens to be NOT NULL and a foreign key to T2 - so goodbye T2.
But, the only way I could do that reduction was to understand the data model... If I didn't know the constraints - all of them - I could not do that.
So, understanding the question and understanding the data model - excessively, extremely, vitally important.
(by the way, 11g will do that reduction above for us - it will rewrite the query for us - IF and only IF the constraints are in place).
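The constraints that make that reduction (and 11g's automatic join elimination) possible would look something like this - a sketch, with constraint names invented, following the example above:

alter table t1 add constraint t1_pk primary key (order_id);

alter table t2 modify (order_id not null);
alter table t2 add constraint t2_fk_t1
      foreign key (order_id) references t1(order_id);
alter table t2 add constraint t2_pk primary key (service_order_id);

alter table t3 modify (service_order_id not null);
alter table t3 add constraint t3_fk_t2
      foreign key (service_order_id) references t2(service_order_id);

Without every one of those declarations - the primary keys, the NOT NULLs, and the foreign keys - neither you nor the optimizer can prove the joins are redundant.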
Lastly - after you have the specification, understand the data model and constraints - what you need is.....
A mastery of SQL - a really good knowledge of what exists. For example, if you had a query such as:
select emp_hist.ename, emp_hist.status_date, emp_hist.something
from emp_hist, (select empno, max(status_date) max_status_date
from emp_hist
group by empno) emp_hist2
where emp_hist.empno = emp_hist2.empno
and emp_hist.status_date = emp_hist2.max_status_date
and you didn't know about "keep"
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:4003695400346079588 you wouldn't know how to rewrite this query using it (for a single pass against the table instead of two!)
or if you didn't know about analytics - you wouldn't know how to rewrite it using that.
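For example, an analytic rewrite of that query - one pass over the table instead of two - might look like this (a sketch, keeping the illustrative column names above):

select ename, status_date, something
  from (select ename, status_date, something,
               max(status_date) over (partition by empno) max_status_date
          from emp_hist)
 where status_date = max_status_date;

and a "keep" rewrite, assuming you want one row per empno, could be sketched as:

select max(ename)       keep (dense_rank last order by status_date) ename,
       max(status_date) max_status_date,
       max(something)   keep (dense_rank last order by status_date) something
  from emp_hist
 group by empno;

Note the two are not identical in the presence of ties on status_date - the analytic version keeps all tied rows, the keep version collapses to one row per empno - that is part of writing down the "specification" first.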
And so on.
(In short - you don't know what you don't know.)
If there were a set of steps - one by one - that you could follow to tune - we would have turned that into software.
Oh wait, we did :) The automatic sql tuning stuff in 10g and above - it does that. It recommends rewrites, additional statistics to gather, indexes to add, materialized views to consider, partitioning schemes....
I wrote about that as well
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:7381085186834