Question and Answer

Tom Kyte

Thanks for the question, Sokrates.

Asked: April 30, 2008 - 6:21 pm UTC

Last updated: July 15, 2008 - 9:44 am UTC

Version: 10.2.0

Viewed 1000+ times

You Asked

What do you think of the technique used in
http://jonathanlewis.wordpress.com/2008/04/27/manual-optimisation/ ?

I would like to hear your opinion.
It's very interesting, but I think we should not use it. Am I right?

and Tom said...

well, I've written about such things myself in the past... Way back in Effective Oracle by Design for example...

These two (Jonathan's idea and mine below) are very similar - except for the fact that I refuse to return 300,000 rows to the client! I agree with Jonathan at the start of his paper - that there is probably some huge mistake being made there. I limited hits to 500 at most - and even then, I thought 500 was probably an order of magnitude too large.


so, I would have fought long and hard, hard and long with the fact that 300,000 rows are coming out - I'd need a really good justification.

And then, if I lost such a battle, and the sort was the absolute killer (see last paragraph below, same sentiment shown there - optimize to the resource that will kill you - if CPU and RAM for sortspace is your weak point...) then his query would be ok.


I like to call his approach "understanding the physics of the data": you have to have a really good understanding of how data is accessed and what is happening (NOT INTERNALS, I am not talking INTERNALS - I'm talking about how indexes are used, what a sort does, how the data is laid out, and so on) and what you can do about it.

Just as a note: you'd have to be very, very careful with that hinted approach. I've said many times - the only way to get sorted data from Oracle is to use an ORDER BY clause. Jonathan is using a set of hints which may or may not work in the future - or even today, under certain circumstances (see his link; he caveats that). As long as the plan turns out to be what he is expecting, it should work - but things change - so any patch to the database would have to include a heavy test of this query, to ensure a change to the optimizer doesn't change the plan.
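
To make that concern concrete, here is a schematic sketch - not Jonathan's actual query, and the table/column names are invented - contrasting a construct that relies on a hinted inner ORDER BY with the only form whose ordering is actually guaranteed:

-- order here depends on the optimizer honouring the hint and on the plan chosen
select t.owner, t.object_name, t.created
  from ( select /*+ no_eliminate_oby */ rowid rid
           from big_objects
          where owner = :own
          order by created
       ) v,
       big_objects t
 where t.rowid = v.rid;

-- the guaranteed way: an ORDER BY on the outermost query block
select owner, object_name, created
  from big_objects
 where owner = :own
 order by created;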

<quote>
Optimize to Your Most Frequently Asked Questions

If you have a system you know will be executing some given query or set of queries hundreds of times per minute, or even per second, that system must be designed, optimized, and built around those queries.

For example, my workplace once had an internal system called Phone. You could telnet into any email machine (back when email was character mode), and on the command line, type phone <search string>. It would return data like this:

$ phone tkyte
TKYTE Kyte, Tom 703/555 4567 Managing Technologies RESTON:

When the Web exploded in about 1995/1996, our group wrote a small web system that loaded this phone data into a table and let people search it. Now that it was in a database and had a little GUI to go with it, it started becoming the de-facto standard within the company for looking up information about people. Over time, we started adding more data and more fields to it. It really started to catch on.

At some point, we decided to add a lot more fields to the system and rebuild it with more features. The first thing we did, based on our knowledge of how people would use this simple little system, was to design the tables to hold this data. We had a read-only repository of data that people would be searching, and it needed to be fast. This system was growing in popularity every day and was threatening to consume our machine with its resources. We had a single 67-column, 75,000-row table upon which we wanted to perform a simple string search against various fields. So, if a user put in ABC, it would find ABC in the email address, or the first name, or the last name, or the middle name, and so on. Even worse, it would be interpreted as %ABC%, and the data was in mixed case.

There wasn't an index in the world that could help us here (we tried them all), so we built our own. Every night, as we refreshed the data from our human resources system (a complete refresh of data), we would also issue the following after we were finished:

CREATE TABLE FAST_EMPS
PCTFREE 0
CACHE
AS
SELECT upper(last_name) || '/' || upper(first_name) || '/' || ... || '/' ||
       substr( phone, length(phone)-4 ) SEARCH_STRING,
       rowid ROW_ID
  FROM EMPLOYEES
/

In effect, we built the most dense, compact table possible (PCTFREE 0) and asked that it be cached if possible. Then we would query:

select *
  from employees
 where rowid in ( select row_id
                    from fast_emps
                   where search_string like :bv
                     and rownum <= 500 )

This query would always full-scan the FAST_EMPS table, which is what we wanted, because that table was, in fact, our "index." Given the types of questions we were asking, that was the only choice. Our goal from the outset was to minimize the amount of data that would be scanned, to limit the amount of data people would get back, and to make it as fast as possible. The query shown here accomplishes all three goals.

The FAST_EMPS table is typically always in the buffer cache. It is small (less than 8% the size of the original table) and scans very fast. It has already done the work of the case-insensitive searching for us once (instead of once per query) by storing the data in uppercase. It limits the number of hits to 500 (if your search is broader than that, refine it; you'll never look at 500 hits). In effect, that table works a lot like an index, since it stores the ROWID in EMPLOYEES. There are no indexes employed on this system in order to do this search, just two tables.

Before settling on this design, we tried a couple of alternative approaches:
· We tried using a fast full scan on a function-based index (close, but not as fast).
· We tried interMedia text (not useful due to the %ABC% requirement).
· We tried having just an extra field in the base EMPLOYEES table (wiped out the buffer cache because it was too big to full-scan).

It may seem foolish to have spent so much time on this detail. However, this one query is executed between 150,000 and 250,000 times a day. This is two to three times a second, every second, all day long, assuming a constant flow of traffic (and we cannot assume that, since we frequently have spikes in activity, as most systems do). If this single query performed poorly, our entire system would fall apart, and it is just one of thousands of queries we need to do. By determining where our weak points would be (the lowest hanging fruit, so to say) and concentrating on them, we were able to build an application that scales very well. If we had tried the tuning-after-the-fact principle, we would have found ourselves rewriting after the fact.

The point is that when developing a system, you need to give a lot of thought to how people will actually use that system. Consider how physical organization can matter, and realize the impact a simple two-times increase in logical I/O might have on your system's performance.

</quote>
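
As a usage sketch for the query in the quote above (the bind-variable handling is my assumption; the excerpt does not show it), the application would uppercase the user's input and wrap it in wildcards before binding:

variable bv varchar2(100)
exec :bv := '%' || upper( 'abc' ) || '%'    -- user typed "abc"; the stored data is uppercase

select *
  from employees
 where rowid in ( select row_id
                    from fast_emps
                   where search_string like :bv
                     and rownum <= 500 );

And, purely for reference, a minimal sketch of the first alternative listed above - the function-based index that was tried and found "close, but not as fast" (the index definition is an assumption; the excerpt does not show it):

create index emp_search_fbi on employees
  ( upper(last_name) || '/' || upper(first_name) || '/' ||
    substr( phone, length(phone)-4 ) );

select *
  from employees
 where rowid in ( select /*+ index_ffs( e emp_search_fbi ) */ rowid
                    from employees e
                   where upper(last_name) || '/' || upper(first_name) || '/' ||
                         substr( phone, length(phone)-4 ) like :bv
                     and rownum <= 500 );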


Comments

not internals ?

Sokrates, May 01, 2008 - 3:58 pm UTC

I think your example is quite a bit different from Jonathan's.

I think he uses internals. For example, I can't find "no_eliminate_oby" in the doc or even on Metalink.

And you have no guarantee that Jonathan's query really returns the data in the order wanted, have you?
The optimizer may decide to eliminate the ORDER BY in the inner query, isn't that true?
Jonathan says that, due to the no_eliminate_oby hint, it won't.
But could it in Oracle 12, 13, ...?

So, I tend to say his "manual optimization" is not recommendable.

Your example doesn't use any undocumented stuff at all.
Tom Kyte
May 05, 2008 - 2:56 pm UTC

Hints are directives - but that does not mean something else (a patch, anything) could not totally break that.

Definitely, a system using the "ignore hints" parameter in the init.ora would break it.

I now think this is rather dangerous. It could be, if someone doesn't know what it is doing, how it is doing it, and what it relies on - and every patch/upgrade would have to be tested to make sure the plan did not change. You might even want to lock statistics on the underlying tables so that nothing changes.

And now with 10g or 11g?

Duke Ganote, May 01, 2008 - 5:47 pm UTC

Would you follow the same approach with 10g or 11g, or are there new features you'd use instead? If you weren't caching (or maybe even if you did?) I assume you'd COMPRESS instead of just PCTFREE 0.
Tom Kyte
May 03, 2008 - 8:34 am UTC

Well, since we built a single string via concatenation, it would not compress well - table block compression works by factoring out redundant values on a block, and it would be unlikely for two rows to have strings with all of the same information in this example...

It is still implemented the same way today.

thank you

Sokrates, May 02, 2008 - 11:41 am UTC

"I now think this is rather dangerous "

that was my opinion

Thank you for confirming that
Tom Kyte
May 02, 2008 - 12:27 pm UTC

he is going to caveat that page with a warning...

Warning in place

Jonathan Lewis, May 02, 2008 - 1:14 pm UTC

Although I have reasons for arguing with Tom on this one - only about the degree of threat, and what you do to mitigate it - I have put a big warning at the start of the note to point out that Tom and I do not agree about the safety of using this mechanism.

There's lots more to be said around this topic - I've even started sketching out a presentation about the whole area of how you assess the risk when using strategies that depend on understanding the mechanisms. It's probably worth a good 90 minutes - and there will be one or two blog items following up on the original posting.


Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk


Great example

A reader, May 05, 2008 - 12:02 pm UTC

Great example of having your own index.

Hmmm

Greg Solomon, May 06, 2008 - 6:25 am UTC

Just wondering about using optimiser plan stability with this. What if we stored an outline and then changed the query so that, e.g., the FROM clause also contained (SELECT 1 / COUNT(*) FROM user_outlines WHERE NAME = 'OUTLINE_FOR_JL_SORT')?

Then, if the outline isn't there, the query will throw a division-by-zero error. Or I guess we could call out to a function to throw a custom exception; that might look a bit less script-kiddie.
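
For illustration, a minimal sketch of that guard, hung off the FAST_EMPS search from the answer above (only the 1/COUNT(*) check is the point; the rest is just a carrier query):

select e.*
  from employees e,
       ( select 1 / count(*) outline_is_present   -- 1/0 raises ORA-01476 if the outline is gone
           from user_outlines
          where name = 'OUTLINE_FOR_JL_SORT' ) chk
 where e.rowid in ( select row_id
                      from fast_emps
                     where search_string like :bv
                       and rownum <= 500 );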

Any use ?
Tom Kyte
May 07, 2008 - 12:48 am UTC

Well, you'd have to make sure that outlines were enabled as well - just having the outline stored doesn't mean it will be used until you enable them.
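
A minimal sketch of what "enable them" involves (the category name is hypothetical, and note the setting is not persisted across instance restarts):

-- system wide, for the DEFAULT category:
alter system set use_stored_outlines = true;

-- or per session, naming a specific outline category:
alter session set use_stored_outlines = jl_sort_outlines;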

"you might even want to lock statistics on the underlying tables to make it so nothing changes"

Sokrates, May 06, 2008 - 12:58 pm UTC

I do not understand this remark - what do you mean by that?
The software evaluating the statistics changes when you are patching/upgrading?

Welcome to Germany anyway.
Hope you have the opportunity to enjoy the nice spring a bit.

Looking forward very much to seeing you on Thursday in Munich.
Tom Kyte
May 07, 2008 - 12:57 am UTC

I am saying "all other things held constant" - every time you patch, you'd have to test.

Every change, I think, you would want to test.

Statistics are a change. I'd want to keep all inputs into the optimizer "constant", and locking statistics on the underlying tables would keep most of the inputs to the optimizer constant.
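
As a sketch of that statistics locking (the table names are placeholders for whatever tables the hinted query depends on):

begin
   dbms_stats.lock_table_stats( ownname => user, tabname => 'T1' );
   dbms_stats.lock_table_stats( ownname => user, tabname => 'T2' );
end;
/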

"Would you follow the same approach with 10g or 11g..."

Connor, July 15, 2008 - 1:04 am UTC

Probably a look at the prefix/suffix extension to text indexes might help with the "%ABC%" requirement (but oh my goodness, those kinds of indexes get very big very fast...)
Tom Kyte
July 15, 2008 - 9:44 am UTC

I probably would follow the same path with 10g and 11g (well, we do...), as those prefix indexes can get rather large rather fast.
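
For reference, a rough sketch of the Oracle Text option Connor mentions (the preference, index, and column names are made up; SUBSTRING_INDEX is the attribute that supports the leading-wildcard '%ABC%' style search, and it is also what makes these indexes grow so quickly):

begin
   ctx_ddl.create_preference( 'emp_wordlist', 'BASIC_WORDLIST' );
   ctx_ddl.set_attribute( 'emp_wordlist', 'PREFIX_INDEX',    'YES' );
   ctx_ddl.set_attribute( 'emp_wordlist', 'SUBSTRING_INDEX', 'YES' );
end;
/

create index emp_search_txt on employees( search_text )
   indextype is ctxsys.context
   parameters( 'wordlist emp_wordlist' );

select * from employees where contains( search_text, '%ABC%' ) > 0;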
