Good Lesson in Critical Thinking
A Lurker, February 24, 2005 - 8:48 am UTC
This is a great article! Not so much because it uses experiments and results to prove its assertions and answer questions. Rather it is great because it demonstrates the value of critical thinking and a scientific approach to database administration. Remember, we study Computer _Science_, so let us treat it as science.
I have worked in quite a few database shops and I can tell you that there are many who are held out as gurus that are not. In one shop, the senior DBA who was brought in at great expense and given (too) much authority by management held that all tables must be reorganized when they get too many extents. In fact, he wanted all tables to be in one extent, and if a table grew to over 100 extents he would pretty much throw a tantrum. So much for high priced gurus. In another shop, they brought in some consultants for a project, and these consultants had told them to use the RULE hint on all queries. This was appropriate for only one release of one version of Oracle, years ago, but they were still doing it years after it was no longer relevant -- because this is what they had always done, and they insisted that the consultants were the experts. These same consultants had coded a (large) number of cursor loops with interim commits (selecting across commits), and when the transaction threw an ORA-01555 error their advice was to commit more frequently. So much for high priced consultants. None of these high priced DBAs and developers were ever challenged to prove their assertions.
The same applies to anyone who writes a book, sets up a web site, or hangs out his consulting shingle. Make him prove his assertions with empirical evidence. I want something I can touch, examine, feel, and measure -- not just some assertion that appears to be reasonable.
So learn the facts that Tom has to share, but always take from Tom his main lesson, the one that appears time and time again on this web site: TEST and PROVE.
Thank you Tom.
Table re-organisation
Jonathan Lewis, February 24, 2005 - 8:57 am UTC
Tom,
Just one extra point that goes with the table re-org idea. It's one of those things that people suggest from time to time because it seems easy. But just because it's easy to do, it's not necessarily the right thing to do.
If a table has a lot of migrated rows then, as you say, there is a reason for it. The most likely reason is that the rows start small and grow over time. If they do a re-org, what value do they set for PCTFREE as they re-org?
If they use a value that is appropriate for NEW rows, it will be larger than its current value, and they will be wasting a lot of space in blocks which are 'full' of old, complete rows.
If they use a value that is appropriate for most of the existing data, then it will (should) be close to zero, which means all the newer rows will immediately start migrating as they are subsequently updated.
A table re-org is NOT a trivial exercise - especially if it is really needed (even if you use the online redefinition package).
(You've probably got the minimize_records_per_block trick somewhere on your website - but how many people know of it, I wonder).
February 24, 2005 - 9:20 am UTC
Jonathan - thanks, the table with migrated rows is a tricky one to fix for sure.
and constant reorgs do not fix anything. My big problem with reorgs, at the end of the day, is that people normally do them "offline". They say to mgmt "Oracle needs constant reorgs, I have to take you offline this weekend".
It just ain't so. Reorgs are the exception, and you need to have an understanding of the environment, not just a bunch of numbers and ratios.
Reorg = risk
Connor, February 24, 2005 - 9:27 am UTC
The other thing that the article seems to gloss over is that a reorg is always
a) an outage
Even with the super-duper online facilities in v9/v10, there is still some, albeit brief, outage.
b) a risk
But more important is the risk, no matter how small, that something might go wrong during the process, or that things might get worse once the process has completed. And if things *do* go wrong, you can't un-reorg a system.
Reorgs, more often than not, are a CLM (Career Limiting Move) for a DBA.
February 24, 2005 - 9:32 am UTC
true story:
o system going slow
o no idea "why"
o had changed 6 months ago from "analyze table t compute statistics" to using dbms_stats, with a method_opt that skipped column stats
o decided "reorgs CANNOT hurt, can only help or stay the same"
o so they exported, dropped and imported (dangerous but what the heck)
o ran dbms_stats and, for the first time, had no column stats (before, they had "legacy" column stats from when they used analyze -- better than nothing)
soooo..... Monday morning, that reorg that could not hurt? It shut them down, plans went to heck. It took a while to figure out "what went wrong" (it was like a mystery to be solved, detective work; no one wanted to admit to anything -- it was hard)
reorgs cannot hurt, can they....
What is mgmt?
A reader, February 24, 2005 - 9:28 am UTC
"They say to mgmt..."
U r not being clear.
Sorry to bust ur chops, but what is good for the goose...
February 24, 2005 - 9:32 am UTC
very good - I truly try to not abbreviate, point taken.
Great !
Georg, February 24, 2005 - 10:35 am UTC
Thanks Tom. As usual great scientific proof !
fantastic
jw, February 24, 2005 - 11:16 am UTC
A reader, February 24, 2005 - 1:05 pm UTC
Oh Tom, you almost scare to death anyone who wants to do a mere followup... Never mind, I agree with what you say!
Great work.. I agree that this is a great study in critical thinking.
Could you please explain the "minimize_records_per_block" trick JPL was referring to?
February 24, 2005 - 5:18 pm UTC
Instead of setting the "perfect pctfree" for newly added rows, you would figure out "what is ultimately the best number of rows per block when they 'get full' (when the rows fill out)".
Let's say that the rows are about 1,900 bytes when filled out (but start really small).
So, on an 8k block, that is about 4 rows per block.
So, you fill the table up with 4 small rows (all on a block),
you alter the table and set minimize records_per_block (it'll be 4),
and now you delete the 4 records
and load up the table. Instead of using pctfree to reserve space on a block, you say "this is the most records I want on a block, since when they flesh the row out fully, only this many will fit".
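A minimal sketch of what that might look like (the table name T, the column sizes, and the 4-row seed are just the assumptions from the example above):

-- rows start small; pad will eventually grow to ~1,900 bytes
create table t ( id number, pad varchar2(2000) );

-- seed one block with 4 small rows (they all fit on one block)
insert into t values ( 1, 'x' );
insert into t values ( 2, 'x' );
insert into t values ( 3, 'x' );
insert into t values ( 4, 'x' );
commit;

-- record "4 rows per block" as the permanent ceiling for this table
alter table t minimize records_per_block;

-- throw away the seed rows and do the real load
delete from t;
commit;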
interesting link from the same site
Menon, February 24, 2005 - 3:15 pm UTC
http://dba-oracle.com/dress_code.htm
I love the
"The suit must be dark blue, gray or charcoal. People can spot a cheap suit at 300 yards, and so can our clients. Invest in a good suit."
and especially the statements
"Protruding nasal hair is prohibited and all tattoos must be fully hidden.
Cologne - Cologne and after-shave is optional, but if it is used it must not be too strong as to call attention to yourself in a closed elevator.
No Neanderthal-style single eyebrows are allowed, and you should shave any loose hair on your forehead or the palms of your hands. All ear hair must be shaved, and no "Admiral Zumwalt" giant eyebrows. If you must wear a toupee, have it custom fit and glued-on securely. No Trump-style comb-overs, please. "
Statistically speaking, I suspect there must be at
least one "Trump-style" comb-over dba who happens to be
also good, no?!;)
February 24, 2005 - 5:44 pm UTC
I'll stick with the technical stuff ;)
Un-balanced freelists
Donald K. Burleson, February 24, 2005 - 7:31 pm UTC
Hi Tom,
>> You opined: Given that freelists cannot be unbalanced, I'm not sure what to make of this.
Huh? Then why does Oracle have the dbms_repair.rebuild_freelists procedure?
When you have multiple freelists (say 50 per table), the freelists are independent from each other, and Oracle cannot share blocks between freelists. A non-parallelized delete job will populate only one of the 50 freelists, leaving a table with loads of free blocks that only one of the 50 concurrent updates can use.
I also suspect that ASS management will do the same thing, only populating one of the 50 freelists and leaving the other 49 empty.
I see unbalanced freelist lengths all the time . . .
February 25, 2005 - 1:53 am UTC
No, I didn't "opine" -- I stated it factually.
There are 5 blocks per freelist: there is a master freelist and, as we need them, we move 5 blocks at a time from the master freelist to the process freelists.
Have you read the docs for this package? It does mention the master freelist; it does say it'll redistribute over freelist groups (very different from freelists themselves). Freelist groups can get unbalanced, but not freelists.
It does not work as you describe.
*And* even if it did, Don -- to what end would it affect insert (assuming you meant that) performance?
And ASSM does not use freelists (so no, it won't)
And -- if you see it "all of the time", please do describe what the symptoms are, how you measure it, how it affects performance, and what the fix does for us. You should be able to show that somehow, shouldn't you?
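(For anyone who wants to start measuring: a sketch of one way to begin, assuming a table of yours named T in a manual segment space managed tablespace. Note that dbms_space reports free blocks per freelist *group*; seeing the individual process freelist lengths would require dumping the segment header block.)

set serveroutput on
declare
    l_free_blks number;
begin
    -- count the blocks on the freelists of freelist group 0
    dbms_space.free_blocks
    ( segment_owner     => user,
      segment_name      => 'T',
      segment_type      => 'TABLE',
      freelist_group_id => 0,
      free_blks         => l_free_blks );
    dbms_output.put_line( 'free blocks = ' || l_free_blks );
end;
/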
Donald K. Burleson, February 24, 2005 - 8:05 pm UTC
Hi Tom,
By the way, what is your take on my challenge?
There are some scientists who believe that predictive models can be created for ANY fully-quantifiable task, even a very complex one with hundreds of variables.
On the other hand, there are those who believe that complex interactions (e.g., tornado prediction) are inherently semi-structured and random (the science of chaos) and can never be solved with a predictive model.
So, given the wealth of time-series data in 10g AWR, do you think that such a model can be created, or not?
February 25, 2005 - 1:56 am UTC
read my last sentence.
I was having issues with your "precepts", which are fundamentally flawed. Any predictive model based on flawed assumptions would be *wrong*.
I think if you read my answer above, you'll see the thoughts I have (that you need to also understand HOW things are used, and that you WANT TO FIX IT ONCE, not over and over....)
I'd rather predict at design time that we need an IOT or cluster....
or that pctfree should be set
or that partitioning should be used
or .....
rather than try to be a "hero dba" that swoops in and appears to fix something for a day or a week -- only to have to do it again and again.....
It's all about Sonny and Cher....
Un-balanced freelists
A reader, February 24, 2005 - 8:46 pm UTC
DKB: Huh? Then why does Oracle have the dbms_repair.rebuild_freelists procedure?
In order to fix corruptions associated with freelists or freelist groups. Read the doco.
DKB: When you have multiple freelists (say 50 per table), the freelists are
independent from each other, and Oracle cannot share blocks between freelists. A
non-parallelized delete job will populate only one of the 50 freelists, leaving
a table with loads of free blocks that only one of the 50 concurrent updates can
use.
Incorrect. As Tom has mentioned, freelists cannot be unbalanced, as free blocks go to the master freelist. You might be getting confused with freelist groups, which are entirely different. Again, read up on the doco.
DKB: I also suspect that ASS management will do the same thing, only populating one
of the 50 freelists and leaving the other 49 empty.
Incorrect. ASSM doesn't even use freelists, but a set of bitmap blocks so the above doesn't make any sense. Read up on the doco.
DKB: I see unbalanced freelist lengths it all the time . . .
No you don't. You just think you do because you don't read enough documentation!
A reader, February 24, 2005 - 9:29 pm UTC
DKB, I think Tom already opened his mind on your "predictive model challenge", whatever that means. In essence, what he said is that you don't look for a "generalized predictive model" as you propose. It is meaningless to strive for one. You cannot start building such a model, or a set of rules, from a few precepts, because those precepts are not valid everywhere. It depends; it changes from system to system. I would actually say Oracle itself is that model, because it gives you the dials and buttons to control it exactly according to what you want.
If you take some time to read some of the 'most popular' asktom articles (link on the title bar of this page), you can probably see how Tom's philosophy is different from yours. One specific article I would recommend is the one about index rebuilding, where Tom proposes that indexes have characteristics dictated by the nature of the application -- they reach their state of equilibrium, and all you can do by rebuilding an index is imbalance that equilibrium, causing the system to move towards it again!!
BTW (and no offense meant), don't you think $250 worth of tuning books (probably authored by yourself) is kind of a low price? How about a copy of "Expert One-on-One"?
February 25, 2005 - 4:42 pm UTC
Actually, I utterly believe in a predictive model -- just one on the other end of the spectrum.
For you see, if you tell me HOW the table is to be used, we can PREDICT what'll happen:
case) we insert data with 20 null columns. Over the next 30 days, the 20 null columns will be populated.
Can anyone guess which of the "precepts" that will map to? And what is the fix? (a PROACTIVE fix, not a reactive reorg over and over and over....)
case) we get stock quotes every day -- stock symb, date, close price, volume sold. We have analysts that work "by stock" (eg: I analyze ORCL, IBM and MSFT). They tend to look at the last week, month, 6 months and year of a stock. But every day we get a file we load. That makes ORCL 01-jan-2005 go onto table block 1, and ORCL 02-jan-2005 go onto table block 42 and so on (the way we analyze is "where stock = :x and date between :a and :b" in general -- but the data in the table is spread out over as many blocks as possible!!)
Can anyone guess which is the "precepts" that will map to? And what might be the fix? (Again, a PROACTIVE fix, not a reactive reorg over and over and over and over ....)
And so on.
Tell us how you USE the data and we can predict -- perfectly -- what'll happen to it if you just use a HEAP table with defaults and a B*TREE index with defaults.
So, he is just about a year late on the prediction side -- after the damage is done.
I'd rather fix it before it starts or at least recognize WHY it happens and fix it once.
This stuff is actually very easy to predict, if you know the structures and how they work.
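For illustration only -- minimal sketches of what "fix it once, at design time" might look like for the two cases above (hypothetical table names; the PCTFREE value is illustrative and would be sized to the actual row growth):

-- case 1: rows arrive small and the null columns fill in later;
-- reserve the growth room up front instead of reorganizing forever
create table orders_stage
( id number primary key,
  c1 varchar2(100), c2 varchar2(100), c3 varchar2(100)  -- start null, filled in over 30 days
) pctfree 70;

-- case 2: access is "where stock = :x and date between :a and :b" --
-- store the rows physically clustered that way from day one (an IOT)
create table stock_quotes
( symbol      varchar2(10),
  quote_date  date,
  close_price number,
  volume      number,
  constraint stock_quotes_pk primary key ( symbol, quote_date )
) organization index;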
The problem is with the "I think" school of professionals
NM, February 25, 2005 - 3:26 am UTC
Don said "I also suspect that ASS management will do the same thing, only populating one
of the 50 freelists and leaving the other 49 empty.
I see unbalanced freelist lengths all the time . . . "
The words "suspect", "think", "opine", "take"; phrases like "see it all the time" -- why do these even appear in technical writing?
The difference between Tom and these guys is exactly there. Tom doesn't BS us with heavy jargon mumbo-jumbo (chaotic/predictive modelling -- gimme a break! It's the Oracle database, after all) - most of his illustrations are doable in SQL*Plus and are self-complete in their scope.
Every time I read an article from the "I think" school of Oracle "specialists", I thank Tom for being there and starting the "I prove" school in the first place.
P.S. The "ASS Management" was not funny. Really!
February 25, 2005 - 5:04 pm UTC
Slide from pretty much every seminar type presentation I do:
There are lots of experts out there
o Make them prove everything
o Statements that should raise your eyebrows:
- It is my opinion...
- I claim...
- I think...
- I feel
- I know (this is the most evil of them all)
- It always worked that way
o Things change, expect that.
o It only takes a single counter case
o Nothing is 100% good, nothing is 100% evil
- it is about understanding when to do what and as importantly
when not to do what!
Classic thread
John Townsend, February 25, 2005 - 8:28 am UTC
This is a classic thread showing why, Tom, you're one of the most outstanding and respected Oracle professionals in the business.
And why Donald K. Burleson isn't.
Keep up the good work on showing us all how things really work rather than pretending to "see things all the time".
John
Donald K. Burleson, February 25, 2005 - 9:29 am UTC
I guess you know something that MetaLink does not:
From MetaLink note 1029850.6: "As can be seen from the algorithms above, using multiple free lists may cause some empty blocks to go unused, causing the segment to extend."
From Melissa Holman - MetaLink - 12/19/01: "The primary cost with using multiple process free lists or multiple free list groups is increased space usage."
*********************************************
It's ludicrous to get caught up in what the word "precept" means, and I'm really surprised that, as Mr. "Prove it", you don't remember that this whole discussion came from when you chastised people for NOT making a proof of the benefit first:
What is the scientific basis in reality that is driving you to do this?
Let me make it perfectly simple, Tom, and see if we get a straight answer:
1. Rebuilding tables and indexes can sometimes improve SQL performance.
2. You can get details about the structure of the table/index.
3. AWR has a historical record of exactly how tables/indexes are accessed by SQL.
So, could a "proof" be designed to predict (a priori) the reduction in I/O, or not?
It's really a simple question. . . . Please resist the temptation to go to your Funk & Wagnalls, and give us a clear answer, if you can.
February 25, 2005 - 5:37 pm UTC
Of course -- and what I said above proves that.
<quote from me above>
There is a master freelist where the free blocks
go, then each freelist pulls 5 blocks at a time from this master freelist. So,
each freelist will have about 5 blocks on it and the master freelist will have
"the rest". They (freelists) cannot really get "out of balance"
</quote>
So, if you think "5" is wacky out of balance -- so be it. I happen to think a freelist with 0 and others with 1..5 is not really "out of balance".
I actually explained how it works: a master freelist, and each freelist will pull 5 at a time.
Now, if 5 is wacky out of balance -- ok, whatever.
But as I also said above, let us say that they do get wildly out of balance, as freelist groups could. How does that relate to update (assuming you meant that) performance? It uses more space than it should, but tell me, Don -- what massive impact will this have on RUNTIME PERFORMANCE? I agree unbalanced freelist groups will waste space; however -- <quote src=dkb>Table updates will run faster after reorganizations when the table has unbalanced freelists</quote>.
And say there were no master freelist -- you do understand that as processes log in and out, they get assigned to different process freelists -- so it is highly doubtful that the "delete process" would always magically get assigned to the same unlucky freelist and that the "insert process" would always get the other one. A simple login/logout would cause you to get a different process freelist over time.
Freelist groups -- which belong to an INSTANCE -- can show this 'unbalancing' act easily in RAC and the older OPS -- for if you run the consumer on node 1 and the producer on node 2, the deletes will leave a bunch of free blocks on node 1 that node 2 won't tend to reuse.
And hey, guess what -- Melissa Holman and I agree 100%: "The primary cost with using multiple process free lists or multiple free list groups is increased space usage."
so, I'm missing your point, other than you seem to be helping me make mine?
Why is it ludicrous to get caught up in the definition of terms? It (to me) is crucial for communication. I wanted all to know that precept means "law", and your "laws", your guiding factors -- well, they don't hold water very well. They are half true (not telling the whole story) or downright inaccurate as demonstrated (full table scans, truly chained rows, migrated rows -- for example, HOW would a reorg fix a truly chained row, Don?)
I don't get this point at all:
<quote>
you don't remember that this whole discussion
came from when you chastised people for NOT making a proof of the benefit first
</quote>
can you clarify what your point was, precisely? This whole discussion came from someone saying to me "what do you think about this", and I answered. I haven't had a discussion with you in a long time.
1) Yes, but my point is -- so? Sometimes, Don, shooting a gun blindly into a room with hostages and hostage takers will kill bad guys, sometimes kill good guys, sometimes kill no one, sometimes kill some of each.
Yes, I will agree that:
rebuilding will:
a) sometimes do good
b) sometimes do bad
c) sometimes do nothing whatsoever.
So -- what is your point here?
2) Yes, no kidding -- duh, maybe even?
3) but your precepts above would lead people in totally the WRONG WRONG direction.
"Updates will be better after these nasty freelists are rebalanced" HOW SO?
"Full table large table scans with chained and migrated rows will run faster" HOW SO?
And it is true that index fast full scans will run faster; however, like I said -- that'd be half the story. You left out what might be really, truly important -- that other people are going to PAY THE PRICE. So, you sort of need to understand what is important in the system (AWR can give you numbers, but it won't tell you what runs the business).
"Multi-block index range scans...." True -- but again, how many ways can a table be sorted? And should we be sorting and re-sorting data over and over? And what is the most important way to sort it? AWR won't answer that, statspack won't answer that. But someone that knows the business, they can.
And all of these things -- ALL of them -- can be predicted a year ahead of time, in design -- how about that. Let's build the predictive model that predicts the future, rather than analyzes the past!!!
(You are not really even putting forth predictive reorganization; you are asking for a REACTIVE reorganization script. It is not predicting the need for anything, it is trying to say "hey, you got yourself a mess, maybe, but then again, maybe not -- cause I'm just numbers and I don't really know what is important to you".)
And people really pay you money?
Scot, February 25, 2005 - 9:35 am UTC
Even after all of your precepts and assumptions are proven wrong on this very thread? And you just ignore that proof and continue posting as if it doesn't matter?
Amazing
sPh, February 25, 2005 - 10:05 am UTC
Probably the most amazing post I have ever seen in any educational forum anywhere. Thanks Tom!
> When you have multiple freelists (say 50 per
> table), the freelists are independent from
> each other, and Oracle cannot share blocks
> between freelists. A non-parallelized delete
> job will populate only one of the 50 freelists,
> leaving a table with loads of free blocks that
> only one of the 50 concurrent updates can use.
This reminds me of nothing more than the big battles over "disk defragmentation" in the Novell NCP filesystem environment in the 1980s. "Expert" after "expert" took to the technical journals and Netwire to demand that Novell open up its API so that "defragmenters" could be developed. Novell stated repeatedly that since the NCP filesystem used randomization to optimize performance, there was no concept of, and no benefit to, "defragmentation" in that environment. "Conspiracy!" was the cry in response to that "counter-intuitive" statement.
So in the end Novell relented and licensed that portion of the API to some developers. Result: a lot of very expensive "defragmentation" products that either didn't affect or actually worsened disk performance.
I think Tom has access to better information on the design of Oracle, no? ;-)
sPh
February 25, 2005 - 5:50 pm UTC
you have the same access I do actually.
I read books
I test ideas
I use metalink (well, an internal version of it; it is in black and white and uses courier font -- just like, well, another site I use a lot)
I discuss with others (inside and outside of Oracle, frankly - outside lots)
I do not read the source code
I do not access "internals"
I do not do anything any one of you cannot do (in fact, guys like Steve Adams go way deeper into the internals than I'll ever go)
So, this is not "insider knowledge"
And please don't take my word for it alone -- I know of these algorithms because of metalink (and I wrote about them in Effective Oracle by Design -- I had really smart people reviewing that for correctness -- I think books should be judged more by their REVIEW team than by the authors sometimes!)
See a couple of notes down for a posting by Mark Bobak, pointing to the metalink note.
For Mr. Burleson
Bill Schwartz, February 25, 2005 - 10:32 am UTC
I have followed this thread (and others) with keen interest. Tom has always displayed the utmost professionalism in his replies, to the point of not mentioning the author or being derogatory in any way. When his opinions differ, he states so and includes why he feels that way. He has never attempted to perform a hatchet job on anyone else's writing, and has never (to my knowledge) tried to take credit for someone else's work. Having said that, I have also done a good deal of research on your web site, read your opinions (and those of your staff), and, as all good researchers do, have also visited other web sites as well (Jonathan Lewis, Connor McDonald, etc.). My two cents (for what it's worth) is that I would prefer to find my misinformation on the web sites of self-proclaimed experts. When I need the right answer, I go to those people who will not feed me myth, who will tell me when I am going off on the wrong tangent (and WHY), and who will show me (triple-underscore the "show me") the right way to get it done. Tom provides this web site gratis, spending time he doesn't have (I am sure as a V.P. and a family man he has precious little to spare) to provide his years of experience for us to pick at. He also has no problem taking issue with the very company that employs him when it needs to be done. Mr. Burleson, you should pretend that the rest of us are from Missouri (as Tom does) - either show us you are right and WHY, or stop claiming to be right.
In TOM I trust.
Michael, February 25, 2005 - 12:17 pm UTC
I have learned through years of experience: trust, but verify. Well, I trust Tom, and he has verified the precepts by use of examples.
Everything is gray, e.g. full table scan is not always bad.
February 25, 2005 - 6:30 pm UTC
better to say "nothing is absolute" perhaps.
Nothing is always true here, nothing is always false.
There are 'things' that have implementations and are useful 'in such and such a case' and not as useful 'in other cases'
Meaning, we've got a big toolbox full of hammers and screwdrivers, we use the right tool at the right time.
For the definitive word on Freelists and Freelist Groups.....
Mark J. Bobak, February 25, 2005 - 12:30 pm UTC
See MetaLink doc id 157250.1, "Freelist Management with Oracle 8i"
by Stephan Haisley. It's excellent!
Don, freelists don't get "unbalanced" because:
- by default, there is only one free list, the master free list.
- If you add additional freelists, aka process freelists,
Oracle's algorithm does a mod on the PID by the number of
process freelists, to ensure that it can evenly distribute
access to the freelists.
- If a particular process free list needs more blocks,
it grabs them off the master free list.
- If you do updates or deletes, they automatically and
implicitly get their own transaction free list. This
free list is private to the transaction. When the
transaction commits, any blocks on the free list bubble
up to the master free list.
As you add more freelists to a segment, they don't
immediately and automatically get blocks linked onto them.
On demand, if and when necessary, blocks will be linked to
these new freelists, and pulled from the master freelist.
If necessary, the HWM will be bumped to populate the
master freelist. In turn, if necessary, a new extent will
be allocated when no space is left above the HWM.
So, yes, segments that have more than the 1 default
master freelist will have more blocks allocated to them.
Strictly speaking, this means that they are less "space
efficient". So what? Who cares? That's the "cost"
one pays for improved concurrency.
evidence on freelists
Alberto Dell'Era, February 25, 2005 - 2:25 pm UTC
This test of mine proves, to the best of my knowledge, that the number of free blocks on a freelist can never go above 5:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:12209165088566#12212146145257
This test is the end result of the discussion with Tom that you can follow on the same page (I've basically extended Tom's test in "Expert", as stated in the review).
Interestingly, I asked that question after reading the same Metalink note quoted by Mark J. Bobak (see the question text).
I think that the math in point 5 of the original question, on a real case, is interesting too, to "visualize" the effect of increasing the FREELISTS parameter.
my 2(euro)cents
Alberto
PS Tom has never contested the test results -- I would like him, or anyone else, to (try to) break it, in the pure spirit of a scientific discussion :)
February 25, 2005 - 6:39 pm UTC
I've never contested it because it works as I understand it to work.
Hey, I'd like to go back to my original comment to Don:
<quote>
And -- if you see it "all of the time" please do describe what the symptoms are,
how you measure it, how it affects performance and what the fix does for us?
You should be able to show that somehow shouldn't you?
</quote>
which he seemed to not address.
What are the symptoms of these massively unbalanced freelists? How did he detect them?
And when he did, how did he show it affects performance?
And what does the fix accomplish?
I mean, it sounds almost trivial, doesn't it? Just create two sessions, one deleting and one inserting (that is his example). We should be able to show some degradation, and how a freelist rebuild fixes it all, right?
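(A skeleton of that test -- a sketch only; the table name, row counts, and FREELISTS setting are arbitrary, it assumes a manual segment space managed tablespace, and the two labelled blocks run in two concurrent sessions:)

create table free_t ( x number, pad char(200) )
storage ( freelists 50 );

insert into free_t
select rownum, 'x' from all_objects where rownum <= 50000;
commit;

-- session 1: the "delete job"
delete from free_t where mod(x,2) = 0;
commit;

-- session 2: the concurrent "insert job" -- time this before and
-- after the "fix" below and compare
begin
    for i in 1 .. 10000
    loop
        insert into free_t values ( -i, 'y' );
    end loop;
    commit;
end;
/

-- the "fix" whose benefit would have to show up in the numbers:
exec dbms_repair.rebuild_freelists( user, 'FREE_T' )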
Mac, February 25, 2005 - 2:56 pm UTC
<QUOTE>
Meaning, you didn't want to REORG at all (at least not more than once).
</QUOTE>
Yeay, THAN, not then! Bravo Tom!
Of course, the core of your answer was on the money as usual, too.
February 25, 2005 - 6:41 pm UTC
I have been on a mental crusade for the last 1.5 to 2 years -- every time I type the word "then" -- I stop, look, and see if "than" doesn't make more sense :)
Albert Einstein's quotes that came to my mind while going through this thread...
Moorthy Rekapalli, February 25, 2005 - 3:49 pm UTC
After going through this thread, I would like to present 3 quotes by Albert Einstein. They are:
1) "Example isn't another way to teach, it is the only way to teach"
Tom typically explains everything with examples by setting up a test case, autotrace, tkprof, etc. He clearly demonstrates his rationale before arriving at a particular answer/conclusion. If someone has a different theory, please explain your rationale with examples rather than stating "predictive models can be created for ANY fully-quantifiable task". While reading this sentence, the second quote rang in my head.
2) "It would be possible to describe everything scientifically, but it would make no sense; it would be without meaning, as if you described a Beethoven symphony as a variation of wave pressure."
For the people still thinking about reorgs and rebuilds on a regular basis -- probably, the third quote will be an eye opener.
3) "Insanity: doing the same thing over and over again and expecting different results."
No offense meant to anyone. We (technical professionals) all need to take a complex business problem and come up with a simple and elegant solution, rather than vice versa. In the IT industry, I repeatedly see the opposite, and hence the skyrocketing prices for developing software...
Thanks,
Moorthy.
February 25, 2005 - 6:47 pm UTC
I liked that, thanks ;)
INDEX LEAF BLOCK FRAGMENTATION
Tamilselvan, February 25, 2005 - 3:55 pm UTC
This is a contest I would not want to win.
Here is my view/comment about B*tree indexes.
I restricted it to Index Leaf Block Fragmentation only.
1. Index Leaf Block Fragmentation
What is ILBF?
Oracle's standard index structure is the B*tree structure.
A B*tree is made up of 3 types of blocks or nodes:
Root Block (Root Node): There is only one block in the root node. All accesses to the index start from here. The root node has many child blocks.
Branch Blocks: This is the middle layer. There is no restriction on the number of branch blocks; it increases with the number of leaf blocks. They are arranged in multiple levels, usually up to four (I still have not found out the reason why there are four levels at the branch level). Regardless of deleted rows, branch blocks are not removed.
Leaf Blocks: This is the bottom layer, made of leaf blocks where the actual index entries along with ROWIDs are stored.
When rows are deleted from a table, all associated index entries are also deleted. This fragments the index leaf blocks. As more and more rows are deleted from the table, the fragmentation level within the leaf blocks increases. Some experts have said that deleted entries within a leaf block never get reused for new index entries, because the application may not insert the same key values again in the table. This is not true. Deleted space will be reused at a later stage, as I will demonstrate.
The performance impact of Index Leaf Block Fragmentation (ILBF) significantly affects only index scans, and it can be more severe than table block fragmentation. ILBF's performance ramifications can make even the most seasoned DBA sweat.
Consider this situation. The Table T1 is described below.
SQL> desc t1
Name Null? Type
---------------------- -------- ----------
ID NUMBER
SYEAR NUMBER
OWNER VARCHAR2(30)
OBJECT_NAME VARCHAR2(128)
SUBOBJECT_NAME VARCHAR2(30)
OBJECT_ID NUMBER
DATA_OBJECT_ID NUMBER
OBJECT_TYPE VARCHAR2(18)
CREATED DATE
LAST_DDL_TIME DATE
TIMESTAMP VARCHAR2(19)
STATUS VARCHAR2(7)
TEMPORARY VARCHAR2(1)
GENERATED VARCHAR2(1)
SECONDARY VARCHAR2(1)
I just populated around 300,000 rows in the table. An equal number of rows is populated for each of the years 2000, 2001, and 2002.
SQL> SELECT COUNT(*) FROM TAMIL.T1 WHERE SYEAR = 2000 ;
COUNT(*)
----------
116545
Suppose an index is on columns SYEAR and ID. The ID col is populated with a sequence.
For some reason, all the rows for the year 2001 are deleted. Hence, all the table data blocks used by the rows for SYEAR=2001 are added to the FREELIST. But what happens to the year 2001 leaf blocks in the index, which are spread throughout the index structure? They remain! The year 2001 leaf blocks become empty and remain; that is, their space is not released.
Here's the performance killer. When an index range scan query is performed that must pass over the empty blocks, the blocks must be read from disk and placed into the data block buffer cache, just like any other block. You may be asking yourself: why does Oracle put empty blocks in the buffer cache?
Answer: even though they are completely empty, Oracle can NOT skip dead index leaf blocks during the index range scan.
This is one of the reasons for rebuilding the index after a large delete.
The following case study illustrates the above theory:
Created a unique index on T1 on columns SYEAR and ID.
SQL> create unique index t1_idx on t1(syear,id) tablespace users storage (initial 1m next 1m pctincrease 0);
SQL> select segment_name , bytes/1024/1024 ,
blocks, tablespace_name , extents
from dba_segments
where owner='TAMIL' and segment_name = 'T1_IDX';
SEGMENT_NA BYTES/1024/1024 BLOCKS TABLESPACE_NAME EXTENTS
---------- --------------- --------- ---------------- ----------
T1_IDX 8.125 1040 USERS 8
12:14:38 SQL> select object_name, object_id, data_object_id
from dba_objects
where object_NAME='T1_IDX' ;
OBJECT_NAME OBJECT_ID DATA_OBJECT_ID
--------------------- -------------- --------------
T1_IDX 64530 64540 ---------Used to join X$BH table
To find out the cached blocks, query from X$BH.
Note: Do not use object_id value to join with X$BH.
Only 2 BLOCKS of T1_IDX object are in the buffer cache.
12:15:45 SQL> select count(*) from x$bh where obj=64540 ;
COUNT(*)
----------
2
Get the explain plan for the query.
SQL> select syear, id from t1 where syear >= 2000 and syear <= 2002 ;
Execution Plan
----------------------------------------------------------
rows will be truncated
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=24 Card=349635 Bytes=2447445)
1 0 INDEX (FAST FULL SCAN) OF 'T1_IDX' (UNIQUE) (Cost=24 Card=349635 Bytes=2447445)
Run the actual query:
12:11:00 SQL> set arrays 5000
12:17:22 SQL> set autot trace statis
12:17:29 SQL> select syear, id from t1 where syear >= 2000 and syear <= 2002 ;
349635 rows selected.
Elapsed: 00:00:02.51
12:15:50 SQL> select count(*) from x$bh where obj=64540 ;
COUNT(*)
----------
912
This is expected. Oracle has rightly put all the leaf blocks into the buffer cache.
Now, I delete one third of the rows from the table T1.
12:18:09 SQL> delete from t1 where syear=2001 ;
116545 rows deleted.
12:24:44 SQL> commit;
Commit complete.
I shut down the DB and started it up again to clean up the buffer cache.
SQL> select count(*) from x$bh where obj=64540 ;
COUNT(*)
----------
0
Ensured that no index leaf block is present in the buffer cache.
Run the actual query.
SQL> select syear, id from tamil.t1 where syear >= 2000 and syear <= 2002;
233090 rows selected.
12:27:06 SQL> select count(*) from x$bh where obj=64540 ;
COUNT(*)
----------
912
I see the same number of index leaf blocks in the buffer cache even after deleting one third of the rows from the table.
Here, I expected around 600 leaf blocks in the buffer cache, whereas Oracle put 912 blocks.
This proves that Oracle cannot skip dead leaf blocks during the index range scan.
To verify it, I analyzed the index and checked the DEL_LF_ROWS col value from INDEX_STATS.
SQL> analyze index tamil.t1_idx validate structure;
Index analyzed.
SQL> select height, blocks, lf_blks, lf_rows, br_blks, br_rows
, del_lf_rows from index_stats;
HEIGHT BLOCKS LF_BLKS LF_ROWS BR_BLKS BR_ROWS DEL_LF_ROWS
-------- --------- ------- --------- ---------- -------- -----------
3 1040 908 349635 3 907 116545
In order to completely remove the deleted entries from the index leaf blocks, I have 2 options:
1. Rebuild the index, or 2. Insert new rows.
First I will rebuild the index.
SQL> alter index tamil.t1_idx rebuild ;
Index altered.
SQL> analyze index tamil.t1_idx validate structure;
Index analyzed.
SQL> select height, blocks, lf_blks, lf_rows, br_blks, br_rows
, del_lf_rows from index_stats;
HEIGHT BLOCKS LF_BLKS LF_ROWS BR_BLKS BR_ROWS DEL_LF_ROWS
----- ----- ------ -------- -------- ---------- -----------
3 650 600 233090 3 599 0
2. When does Oracle reuse the dead leaf blocks?
Many DBAs believe that dead leaf blocks will never be reused by Oracle. That is NOT true.
Oracle reuses the deleted space in the index as and when new rows are inserted into the table.
Now the table T1 has data for the years 2000 and 2002, after deleting the rows for the year 2001.
Now I inserted new rows for the year 2003.
SQL> insert into t1 select t1_seq.nextval, 2003, a.* from dba_objects a;
23311 rows created.
SQL> /
23311 rows created.
SQL> /
23311 rows created.
SQL> /
23311 rows created.
SQL> /
23311 rows created.
SQL> commit;
Commit complete.
SQL> select syear , count(*) from t1 group by syear ;
SYEAR COUNT(*)
---------- ----------
2000 116545
2002 116545
2003 116555 --------------- New rows inserted
The new rows for the year 2003 reuse the index leaf blocks previously used by the rows for the year 2001. The next query proves that.
SQL> select segment_name , bytes/1024/1024 ,
blocks, tablespace_name , extents
from dba_segments
where owner='TAMIL' and segment_name = 'T1_IDX' ;
SEGMENT_NAM BYTES/1024/1024 BLOCKS TABLESPACE_NAME
----------- --------------- ---------- ----------------
T1_IDX 8.125 1040 USERS
After the delete and insert, the number of blocks used by the index never went up. It remains the same: 1040.
14:29:23 SQL> analyze index tamil.t1_idx validate structure ;
Index analyzed.
SQL> select height, blocks, lf_blks, lf_rows, br_blks, br_rows, del_lf_rows
from index_stats ;
HEIGHT BLOCKS LF_BLKS LF_ROWS BR_BLKS BR_ROWS DEL_LF_ROWS
------ ----- ------- ------- ------- -------- --------------------
3 1040 907 360716 4 906 11071
The total number of blocks, 1040, never increased.
Even though the majority of the dead index leaf blocks were reused for the new rows (SYEAR=2003), I can still see some rows (11,071) in deleted index leaf entries. Originally, DEL_LF_ROWS had 116,545 rows.
Instead of dropping and re-creating, I can now rebuild the index online or "coalesce the index", thus avoiding a maintenance window.
SQL> alter index tamil.t1_idx rebuild tablespace users nologging online ;
Index altered.
SQL> analyze index tamil.t1_idx validate structure ;
Index analyzed.
SQL> select height, blocks, lf_blks, lf_rows, br_blks, br_rows , del_lf_rows
from index_stats ;
HEIGHT BLOCKS LF_BLKS LF_ROWS BR_BLKS BR_ROWS DEL_LF_ROWS
------ -------- -------- --------- --------- ---------- -----------
3 1040 908 349645 3 907 0
The above tests prove that deleted space is reused by Oracle.
3. Half Empty Leaf Blocks
What is a half empty leaf block?
A leaf block was initially filled with index keys. After some time, some of the keys got deleted because deletions occurred at the table level. Now the block contains deleted keys as well as some undeleted keys. This block is said to be a half empty leaf block.
According to the B*tree index definition, Oracle has to coalesce the half empty leaf blocks, but it does not do so when keys are deleted/added, thus leaving many blocks half filled.
To overcome this fragmentation, Oracle introduced the alter index ... coalesce command in 8i.
For example, SQL> alter index t1_idx coalesce;
However, the index coalesce command works only on leaf blocks; it does not release branch blocks.
The next case study explains half empty blocks:
I used the same T1 table for this case study also.
The ID col is populated with a SEQ.
SQL> select syear, count(*) from t1 group by syear ;
SYEAR COUNT(*)
---------- ----------
2000 116545
2001 116550
2002 116545
2003 116555
Create a unique index on SYEAR + ID columns
SQL> create unique index t1_idx on t1(syear,id) tablespace users
storage (initial 1m next 1m pctincrease 0);
SQL> select segment_name, blocks, bytes from dba_segments where segment_name = 'T1_IDX';
SEGMENT_NAME BLOCKS BYTES
------------ ------- ----------
T1_IDX 1300 10649600
Out of the 116545 rows, I delete 50% of the rows for the year 2002.
SQL> delete t1 where syear = 2002 and mod(id,2) = 0 ;
58272 rows deleted.
SQL> commit;
Commit complete.
Get the object id from dba_objects.
SQL> select object_id from dba_objects where object_name = 'T1_IDX';
OBJECT_ID
----------
64579
To get the tree dump, run the following command:
SQL> alter session set events 'immediate trace name treedump level 64579';
Session altered.
A trace file is generated in the udump dir. Sample output is given below:
leaf: 0xc002cc 12583628 (33: nrow: 378 rrow: 378)
leaf: 0xc002cd 12583629 (34: nrow: 378 rrow: 378)
leaf: 0xc002ce 12583630 (35: nrow: 378 rrow: 378)
leaf: 0xc002cf 12583631 (36: nrow: 378 rrow: 378)
leaf: 0xc002d0 12583632 (37: nrow: 378 rrow: 378)
leaf: 0xc002d1 12583633 (38: nrow: 378 rrow: 378)
leaf: 0xc002d2 12583634 (39: nrow: 378 rrow: 378)
leaf: 0xc002d3 12583635 (40: nrow: 378 rrow: 378)
leaf: 0xc002d4 12583636 (41: nrow: 378 rrow: 198)
leaf: 0xc002d5 12583637 (42: nrow: 378 rrow: 189)
leaf: 0xc002d6 12583638 (43: nrow: 378 rrow: 189)
leaf: 0xc002d7 12583639 (44: nrow: 378 rrow: 189)
leaf: 0xc002d8 12583640 (45: nrow: 378 rrow: 189) ----------------- Half empty blocks
leaf: 0xc002d9 12583641 (46: nrow: 378 rrow: 189)
leaf: 0xc002da 12583642 (47: nrow: 378 rrow: 189)
leaf: 0xc002db 12583643 (48: nrow: 378 rrow: 189)
The majority of the leaf blocks contain around 378 rows. The blocks that have deleted keys have 189 rows.
SQL> validate index t1_idx ;
SQL> select blocks, lf_rows, lf_blks, br_blks, del_lf_rows from index_stats;
BLOCKS LF_ROWS LF_BLKS BR_BLKS DEL_LF_ROWS
------ -------- ----- ------- -----------
1300 466195 1216 4 58272
To merge the half empty blocks, run the coalesce command
SQL> alter index t1_idx coalesce;
Index altered.
Again run validate command.
SQL> validate index t1_idx;
Index analyzed.
SQL> select blocks, lf_rows, lf_blks, br_blks, del_lf_rows from index_stats;
BLOCKS LF_ROWS LF_BLKS BR_BLKS DEL_LF_ROWS
------ ------- ------- ------- -----------
1300 407923 1072 4 0
After coalescing, the deleted keys are gone. The number of leaf blocks is 1072, reduced from 1216.
Will the half empty blocks be reused?
The answer is YES and NO, because the new keys must be placed in the appropriate blocks.
In the next test case, you will see Oracle add new leaf blocks even though some of the blocks are half empty. Oracle will not coalesce the leaf blocks when the new keys are added.
SQL> select segment_name, blocks, bytes
from dba_segments where segment_name='T1_IDX';
SEGMENT_NAM BLOCKS BYTES
----------- ---------- ----------
T1_IDX 1300 10649600
SQL> select syear, count(*) from t1 group by syear ;
SYEAR COUNT(*)
---------- ----------
2000 116545
2001 116550
2002 116555
2003 116555
SQL> delete t1 where syear = 2002 and mod(id,2) = 0 ;
58277 rows deleted.
SQL> commit;
Commit complete.
Now I inserted new rows for the year 2004.
SQL> insert into t1 select t1_seq.nextval, 2004, a.* from dba_objects a ;
23311 rows created.
SQL> /
23311 rows created.
SQL> commit;
Commit complete.
SQL> select segment_name, blocks, bytes from dba_segments where segment_name='T1_IDX';
SEGMENT_NAM BLOCKS BYTES
------------ ------ ----------
T1_IDX 1430 11714560
Oracle added 130 new blocks, so the total went up to 1430.
Summary
In general, Oracle's B*Tree structure is well maintained, except on a few occasions. You will not notice a problem in an OLTP application, in which massive deletes/updates may not happen. However, in DSS or DW systems, you will see slight performance degradation after a massive delete/update.
Again, I welcome all of your comments.
Tamil
A reader, February 25, 2005 - 3:56 pm UTC
Excellent points, Moorthy. Very, very relevant to the topic.
Is there any URL where I can read more of these Einstein quotes? Could you please post it if you don't mind.
Quality Control.
Jonathan Lewis, February 25, 2005 - 4:30 pm UTC
I'd like to answer the response from Don Burleson that started:
"I guess you know something that MetaLink does not:".
Don's original article states:
"Number 4: Table updates will run faster after reorganizations when the table has unbalanced freelists (with dictionary-managed tablespaces only). It is unclear if bitmap freelists (automatic segment space management) will benefit."
Tom points out (allowing for the fact that Don probably meant inserts rather than updates, and ignoring the fact that Don seems to be confusing LMTs with ASSM) that the effect of multiple freelists would be to use extra space, but would have no impact on insert performance. Specifically, Tom states: "It could affect space utilization, but not performance."
Don's response (presumably intended to be sarcastic) is that Tom knows more than Metalink, because it says on Metalink:
"multiple free lists may cause some empty blocks to go unused, "
and
"The primary cost with using multiple process free lists or multiple free list groups is increased space usage"
This, of course, is exactly what Tom said. And neither of the quoted comments says anything about the performance issue that Don originally claimed.
In passing - given that Metalink is a set of texts generated by people who work for Oracle Corporation, and Tom is a person who works for Oracle Corporation, if the two sources disagree, how do you decide which one is more likely to be right?
Secondly:
Don complains that Tom splits hairs about the use of the word 'precept' when Tom shouldn't do this because he is the master of 'proving the benefit'.
But Don's original article (paraphrased) says: please write a program for me that tells me what to re-organize - and you'll know what to re-organize by following these "precepts".
Tom, as the master of 'proving the benefit', has proved that the precepts are naive and simplistic, and that anyone who spends the effort writing a program against them is almost certainly wasting their time.
Proving that something is a misguided waste of effort is one of the best proofs you can give - so long as it's before the event - especially in the arena of Oracle development.
Finally, we get a simple question. We will ignore Don's attempt to imply that Tom is a devious character with verbal diarrhoea who can't give a straight answer:
Based on the statements:
a) Rebuilding tables and indexes can sometimes improve SQL performance. (which I also believe to be true - especially the 'sometimes' bit).
b) You can get details about the structure of the table/index (which is also true, although sometimes expensive)
c) AWR has a historical record of exactly how tables/indexes are accessed by SQL (which is almost true, and if you have the licence you are allowed to look at the tables, and we won't mention the cost of keeping them well-populated)
To quote Don: 'Could a "proof" be designed to predict (a priori) the reduction in I/O?'
In principle, I believe it could be possible to create an expert system, or AI program, that could identify a few objects that were likely to be good candidates for rebuilding.
But to identify a relatively worthwhile, low-risk, small set of objects, it would have to be based on a lot more "precepts" than the small number of simplistic (and incorrect) ones suggested in the original article.
urls
Moorthy Rekapalli, February 25, 2005 - 4:34 pm UTC
In defense of the Don: Index reorgs
rajXesh, February 25, 2005 - 4:40 pm UTC
SQL*Plus: Release 9.2.0.1.0 - Production on Fri Feb 25 14:58:07 2005
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning option
JServer Release 9.2.0.6.0 - Production
SQL> create table t1 ( x char(30) not null );
Table created.
SQL> create index t1_idx on t1(x);
Index created.
SQL> create table t2 ( x char(30) not null );
Table created.
SQL> create index t2_idx on t2(x);
Index created.
<b>
The above-mentioned add_rows procedure
</b>
SQL> create or replace procedure add_rows( p_n in number )
2 as
3 l_data char(20);
4 begin
5 for i in 1 .. p_n
6 loop
7 l_data := trunc(dbms_random.value(1,500000));
8 insert into t1 values ( l_data );
9 insert into t2 values ( l_data );
10 commit;
11 end loop;
12 end;
13 /
Procedure created.
<b>
Create a delete rows procedure that randomly tries to delete n rows
</b>
SQL> create or replace procedure del_rows( p_n in number )
2 as
3 l_data char(20);
4 begin
5 for i in 1 .. p_n
6 loop
7 l_data := trunc(dbms_random.value(1,500000));
8 DELETE FROM t1 WHERE t1.x = l_data;
9 DELETE FROM t2 WHERE t2.x = l_data;
10 commit;
11 end loop;
12 end;
13 /
Procedure created.
<b>
Initial load of the tables
</b>
SQL> exec add_rows(&1)
Enter value for 1: 100000
PL/SQL procedure successfully completed.
<b>
Randomly delete ~10% of the rows and then add 10% back.
</b>
SQL> exec del_rows(&1)
Enter value for 1: 10000
PL/SQL procedure successfully completed.
SQL> exec add_rows(&1)
Enter value for 1: 10000
PL/SQL procedure successfully completed.
<b>
Rebuild
</b>
SQL> alter index t1_idx rebuild;
Index altered.
<b>
Now runstat the dang thing
</b>
SQL> exec runstats_pkg.rs_start;
PL/SQL procedure successfully completed.
SQL> ed
Wrote file afiedt.buf
1 declare
2 v t1.x%type;
3 begin
4 for i in 1000..2000 loop
5 begin
6 select /*+ index_ffs(t1 t1_idx)*/ x into v from t1 where x = i;
7 exception
8 when others then
9 null;
10 end;
11 end loop;
12* end;
SQL> /
PL/SQL procedure successfully completed.
SQL> exec runstats_pkg.rs_middle
PL/SQL procedure successfully completed.
SQL> ed
Wrote file afiedt.buf
1 declare
2 v t2.x%type;
3 begin
4 for i in 1000..2000 loop
5 begin
6 select /*+ index_ffs(t2 t2_idx)*/ x into v from t2 where x = i;
7 exception
8 when others then
9 null;
10 end;
11 end loop;
12* end;
SQL> /
PL/SQL procedure successfully completed.
SQL> exec runstats_pkg.rs_stop(500)
Run1 ran in 12719 hsecs
Run2 ran in 12870 hsecs
run 1 ran in 98.83% of the time
Name Run1 Run2 Diff
LATCH.row cache objects 2,810 3,326 516
LATCH.multiblock read objects 506 1,220 714
LATCH.library cache pin alloca 4,389 5,760 1,371
LATCH.library cache pin 9,600 11,562 1,962
LATCH.shared pool 8,182 10,474 2,292
LATCH.cache buffers lru chain 1,699 4,454 2,755
LATCH.library cache 14,991 18,853 3,862
STAT...calls to get snapshot s 10,943 14,939 3,996
LATCH.session idle bit 18,569 32,804 14,235
LATCH.simulator hash latch 40,290 58,361 18,071
STAT...no work - consistent re 626,783 830,442 203,659
STAT...consistent gets 636,729 844,389 207,660
STAT...session logical reads 636,785 844,455 207,670
LATCH.cache buffers chains 1,294,860 1,895,285 600,425
Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
1,410,452 2,058,519 648,067 68.52%
PL/SQL procedure successfully completed.
SQL>
SQL>
Yes, I agree that the INSERT will be slower due to the index 'fattening up', but look at the difference in the latches during the SELECT.
Looking at this evidence, I would say that an index rebuild will definitely improve scalability. Generally speaking, in most databases you 'insert' once and 'select' multiple times (with a few updates and deletes thrown in for good measure). The number of selects is huge compared to the number of inserts. Periodic index rebuilds help the select performance and overall scalability of the system by reducing the latches. The inserts/deletes will bloat up the index and slow down the selects, so to counter that we rebuild periodically.
I don't much care if there is a method to determine an 'optimal' time to rebuild the index. Every database has a maintenance window; just choose one. Do it, say, once a year. Once every 6 months. Whatever works. And why should I care if a very small minority of the users (those who will do mass bulk inserts) notice some slowdown? It's not as if they are sitting at the terminal waiting for this huge 100000 row insert to finish. Most of these operations are batch operations that will take place during a batch cycle window.
Ah, but the users who will notice are the vast majority who will be querying the database. Their selects will definitely be faster!
February 25, 2005 - 7:01 pm UTC
I did not say anything different.
I asked simply "what is it that drives my business here?"
IF I am in the business of booking orders
AND I write a report that runs lots at night
THEN
perhaps booking orders is what I want to optimize, the reports are run
in isolation
end if
The point being -- it is obvious that if the index is smaller, a fast full scan will be faster ("duh" is a term I associated with that after a while).
However, how long between rebuilds? 5 minutes? 10 minutes? 2 days? And to what end -- to kill the inserts again?
Indexes are many times like people: we have a weight we gravitate towards.
See the Einstein quote number 3 above, it is interesting.
How about the link:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:6601312252730
<quote>
Once a month the indexes are rebuilt. (Alter index rebuild)
On the day following the indexes rebuild 50 logs are created
On the following days 45...40...35...30....25....down to 10 at 10 logs this
remains constant at 10 Mining the logs we see that we have increase INTERNAL
INDEX UPDATES
</quote>
Hmmm, so it looks like the index rebuilds really did something here -- the index sizes must have gone down CONSIDERABLY (how else could you explain the 450% increase in redo generation? they must have shrunk a ton, since they had to grow by leaps and bounds in the ensuing days)...
I've never said "don't rebuild"
I've said "please understand why you are doing it and what it is doing to you -- good and bad"
And perhaps you can actually set it up such that you NEED NOT rebuild -- we can predict when these things will break down -- and if we can do that, many times you can design around it.....
Else it is Sonny and Cher, "I Got You Babe", over and over....
Most indexes that would be picked up to be rebuilt on these metrics would be the type demonstrated above -- randomly inserted rows, with updates that move values around (like an index on a name or some other randomly arriving data). No matter how frequently you rebuild them, they just get fat again.
Extremes are easy
Jonathan Lewis, February 25, 2005 - 6:31 pm UTC
The example above that uses an index_ffs is about the only example ever quoted to demonstrate that there is a benefit to rebuilding indexes. And it is true - if you are going to run lots of queries that do most of their work in the index (large range scans on badly designed indexes will do the same) then you can usually demonstrate a visible benefit - which might be worth the cost and risk of a rebuild.
But think about this - rather than rebuild your index, why not rewrite your query as:
select /*+ index(t2 t2_idx)*/ x
into v from t2 where x = rpad(i,30);
I got 1/15 of the latching when I did that and the query completed in 1/12 of the time.
Or how about simply dropping the index after the high-precision bit of the load (the deletes). I got an 18% reduction in latching by doing that (but a 5% increase in CPU - which do you think is the better side of the trade ?)
The criticism is not that Don's suggestions are 100% wrong all of the time - after all, Tom seems to have agreed with some of Don's comments.
You've provided an extreme example - where the unusual nature of the processing should have dictated some special consideration of mechanisms BEFORE the design went live. Would you really be happy if someone gave you the "super" tool that told you this index should be rebuilt regularly ? I wouldn't.
February 25, 2005 - 7:19 pm UTC
I did agree with some of the points; I just pointed out that in some cases "the truth, the WHOLE truth, and nothing but the truth" was not there -- emphasis on WHOLE truth for some...
Coalesce/Rebuild indexes
Tamil, Atlanta USA, February 25, 2005 - 7:31 pm UTC
Why does Oracle read/store dead leaf blocks in the SGA during an index range scan?
Can any one explain?
Tamil
February 25, 2005 - 7:51 pm UTC
because there is no such thing as a dead leaf block?
leaf blocks with zero entries generally get "unlinked" and put onto the freelist.
leaf blocks with at least one entry *cannot* be.
but even one with no entries that is still in the structure has the "prev" and "next" pointers in it.
when you do:
select * from t where x between 42 and 55;
Oracle will:
root block -> branch block -> branch or leaf, depending on height
eventually, it gets from a branch to a LEAF. Each leaf is linked to the previous and next leaf in the structure, so to do a range scan we never need to go up and down and up and down -- we just go left and right.
There is no such thing as a "dead leaf block"
And it would not be normal to have a totally empty leaf block in the structure -- a leaf with 1 row, yes.
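If you want to see what an index really looks like inside, one quick sketch (t_idx is a placeholder name; note that VALIDATE STRUCTURE locks the underlying table against modifications while it runs):

analyze index t_idx validate structure;

select lf_rows,       -- leaf entries, including not-yet-reused deleted ones
       del_lf_rows,   -- how many of those are deleted entries
       lf_blks,       -- leaf blocks currently in the structure
       br_blks,       -- branch blocks
       pct_used       -- how full the index really is
  from index_stats;   -- populated only for the LAST index validated in your session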
One slight correction :-)
Connor, February 26, 2005 - 7:19 am UTC
"I use metalink (well, an internal version of it, it is in black and white and uses courier font use alot)"
Try logging a TAR in metalink and cut-and-pasting some output from (say) SQL*Plus, or a trace file. Or open up the forums and look at the postings in there... they give no font information, so they assume the default font of the browser (which for most browsers -- IE, Mozilla, Firefox, etc. -- is a proportionally spaced font). Similarly, leading spaces etc. are all gone...
How hard can it possibly be for forum and tar postings to have "<pre>" in them ?
:-(
February 26, 2005 - 8:03 am UTC
that is not a correction -- that is an enhancement request.
The internal site I use uses black and white and courier.
Predictive modeling in 10g r2?
Donald K. Burleson, February 26, 2005 - 9:12 am UTC
Hi Tom,
This will be my last post here because I will not participate in any forum where people will not discuss technical issues without resorting to personal insults.
IMHO, it's very immature and unprofessional to insult people, and I don't understand how my dress code and rude comments from your minions have any value in a legitimate technical discussion. There is a far more polite discussion of this issue in the Oracle DBA forum:
</code>
http://dba.ipbhost.com/index.php?showtopic=1239
BTW, it appears that you missed the RMOUG presentation on 10g r2 where the presenter published that the use of "baselines to capture and adapt thresholds to expected time-dependent workload variations" is an important new feature of the next release of Oracle.
http://www.dba-oracle.com/oracle_tips_predictive_modeling_10g.htm <code>
Also, your comment below does not make sense to me. Predicting the future without historical justifications is the realm of psychics, not scientists:
"Lets build the predictave (sic) model that predicts the future, rather than analyzes the past!!!"
Have fun insulting me. . .
It's far easier than addressing the technical issues. . .
February 26, 2005 - 10:00 am UTC
your "nice" link where people are being polite to you (on your site)
<quote>
Sorry, an error occurred. If you are unsure on how to use a feature, or don't know why you got this error message, try looking through the help files for more information.
The error returned was:
Sorry, you are not permitted to use this board
</quote>
anyway.
Don -- Please - I'm begging you here - point out any insults that I've made here? Truthfully -- I see none. I cannot, will not, do not control what others say (and I don't create my own reviews here, you'll just have to trust me on that, but really I don't)
I did see a bunch of technically inaccurate -- half of the story "precepts". I tried to technically point them out and correct them. I've supported my discussion with facts, proofs. I think I've taken the 4 "precepts" and shown them to either
a) tell only part of the story
b) be not accurate in their depiction of how the software actually *works*
It is funny, however, that rather than correct me -- show me that your 4 precepts are sound -- you pick one ('unbalanced freelists') and say "ah hah, you are wrong". I said "ok, even if I am wrong and they can become wildly unbalanced, please describe how this affects *performance* and not space utilization?"
You chose not to address any of the technical issues, only to try and make light of me. You countered nothing with fact, only with innuendo. You provided supporting material from metalink that only confirms what I've stated.
I'm not getting it. But I have seen the pattern, here is my predictive model on a different topic:
a) observe someone saying "sorry, that is not accurate and here is why"
b) observe who they are saying it to
c) if that person they are saying it to is "....."
d) then conclude you've just insulted that person's integrity and honesty, and that you are not a professional, apparently.
I've been to this play before, I tried to do it privately in email, but hey - it wasn't worth the effort, didn't get the material corrected (entirely). It did get some interesting "meet me on stage" type challenges to a hugely increased email distribution list, but that is another story I guess.
I find this "incorrect data reorg" I'm attempting here less useful in general (I'm trying to reorg the incorrect stuff over and over again, rather than fix the cause), but in this case I guess my hands are tied.
I've long ago come to the conclusion that yes, predictive modeling is in fact not only possible -- but technical people have in fact been doing it for years -- only BEFORE (hence predictive) instead of AFTER (hence reactive).
These are *not* my minions Don, these are people saying what people will say. I guess this is why you do not participate in the newsgroups anymore as that is an open forum where people say whatever they want to say -- sort of like in real life.
As for your closing statements there about predicting the future.....
I choose to predict the future based on the things such as "how you use the data". And I can, you could, others do. We take the information about how things actually work and predict the future based on that, test our hypothesis, run simulations, see it work or not and then field test it (because even the best theories are useless if they do not hold up in practice)
You cannot be predicting the future using AWR there -- you are simply looking at the past and trying to produce a list of objects to reorg based on their level of "already-brokenness".
I'll predict that something will break. And how to make it so it won't in fact break.
You are 'predicting' that something is in fact *already* broken. And not suggesting we fix it, but rather just blindly reorg it over and over and over (groundhog day...)
And again, please - begging you here - where did I insult you?
I did ask some questions -- never addressed. But insult you? Only if pointing out that the 4 precepts are not the best foundation, as they
a) tell part of the story
b) are technically inaccurate for the most part
is considered insulting, so be it.
precept 1: half of the story, and probably the wrong half (I'd probably be asking "why is that index fast full scan so popular in your system?" rather than "let's make it go faster at the expense of everyone else").
precept 2: not even half of the story, and such a rebuild is exceedingly difficult (list the steps, in 10gR1 and before, that rebuild a table "sorted" -- all of the steps, please -- and describe how to do it "online" as a bonus).
precept 3: false on chained rows, false on migrated rows and really begs the question on "low density" as to HOW you got there, rather than just fix it.
precept 4: false on the unbalanced freelists, false on will run faster after rebuilding, false on the misleading reference to dictionary managed tablespace, totally wrong in its understanding of ASSM all together.
If you really think any of the "above false's" should be truths -- well, prove me wrong I guess.
dead leaf block
Tamilselvan, February 26, 2005 - 9:40 am UTC
Tom, Thanks for your quick reply.
<quote>
because there is not such a thing as a dead leaf block?
leaf blocks with zero entries generally get "unlinked" and onto the freelist.
leaf blocks with at least one entry *cannot*
but even one with no entries that is still in the structure has the "prev" and
"next" pointers in it.
</quote>
This is where Oracle's index technique falls short. I used the term "dead leaf block" to mean a leaf block with zero entries. Oracle does not coalesce or modify the "prev" and "next" pointers correctly after a delete.
Please see my earlier post. Is it common that "leaf blocks that have zero entries" are found in the buffer cache?
Tamil
February 26, 2005 - 10:09 am UTC
generally we do unlink them -- where do you see otherwise?
Most all of the time it just happens. Else, a sweeping index would just keep growing (insert monotonically increasing numbers, delete the low numbers -- as the data sweeps along, Oracle does in fact unlink the entirely empty blocks).
see
</code>
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:2913600659112#24034747883219 <code>
the index maintenance routines are actually very sophisticated, and tend to unlink entirely empty blocks.
can you show me a case where your buffer cache is littered with empty index leaf blocks?
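To see the unlinking in action, a minimal sketch of that sweeping pattern (the names and sizes are invented):

create table t ( id number, constraint t_pk primary key (id) );

declare
    l_hi number := 0;
begin
    for pass in 1 .. 20
    loop
        -- add 1,000 new high values on the right-hand edge of the index
        for i in 1 .. 1000
        loop
            l_hi := l_hi + 1;
            insert into t values ( l_hi );
        end loop;
        -- and sweep away everything but the newest 1,000
        delete from t where id <= l_hi - 1000;
        commit;
    end loop;
end;
/

analyze index t_pk validate structure;
select lf_rows, lf_blks from index_stats;

If wholly emptied leaf blocks were never reclaimed, LF_BLKS would grow with every pass; instead it should stay bounded at roughly what 1,000 entries need, because the emptied blocks are unlinked and reused.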
Empty Leaf blocks
Jonathan Lewis, February 26, 2005 - 12:04 pm UTC
Although Oracle does re-use empty leaf blocks, there can be a time lag between emptying and re-use.
As Tom points out, the empty blocks are both 'in the index structure' and 'on the free list'. They will only be taken out of the structure when they are also taken off the freelist, i.e. when a new index leaf block is needed to accept some new rows.
Note - when the block goes off the free list, it could be used almost anywhere in the index structure - it could even become a branch block - it doesn't have to be used as a leaf block that holds rows that have "similar" values to the values it used to hold.
So the previous poster's demonstration was predictable -- it caught Oracle between a bulk delete and any subsequent inserts that might move and re-use the blocks.
This supports the argument for deliberately designed rebuilds in special cases, of course. If you are expecting to do a massive delete and never do any further inserts, then an index rebuild is quite likely to be a good thing. (Of course, marking the index unusable before the delete, and rebuilding it after the delete might be even more appropriate).
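A sketch of that last suggestion (names and the predicate are placeholders -- and note you could not do this to an index that enforces a unique or primary key constraint):

alter index t_idx unusable;                       -- stop maintaining the index
alter session set skip_unusable_indexes = true;   -- let the DML proceed without it

delete from t
 where load_date < add_months( sysdate, -36 );    -- the massive delete

alter index t_idx rebuild;                        -- one deliberate rebuild, by design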
February 26, 2005 - 1:46 pm UTC
exactly -- we are able to predict, based on how you plan on using the data, what will happen -- before it happens.
No Title...
Michael Cunningham, February 26, 2005 - 1:41 pm UTC
Well . . . I for one would like to win the contest. I may not have the DBA knowledge I'd like, but why should that prevent me, or anyone else for that matter, from trying to solve the issue? A predictive model does have usefulness. Heck, if systems were designed correctly from the get-go, many of us would be out of our jobs.
If past behavior is a predictor of future behavior (and I believe many scientists depend on this), why could I not predict that a reorg would improve performance? Even if I only got 80% of the solution -- and I believe the percentage would be higher -- I'd benefit greatly. With that, why would I not be able to determine from past historical data how a table/index is being accessed, the number of blocks that were being read in the past, the number now, the size of the data in the blocks, etc., do some calculations, and then predict that a table/index reorg will "likely" provide a benefit of X fewer blocks being read? Better yet, wouldn't it even be possible from that analysis to suggest values for pctfree and pctused, in an attempt to keep the problem from recurring?
It sounds like a scientific problem that can be addressed through scientific means. I have a source code control system that chains rows in one of its tables. After complaints from the development staff, and after looking into the problem, I did a reorg to fix the chaining. A 20-second process went to 2 or 3 seconds. The reorg did exactly what I predicted. Now, it may happen again in the future, but... can it be predicted ahead of time? Or at least provide me with a warning?
I'm learning more every day from this site and others, and maybe some day I will know enough to fix it once and have it never cause a problem again. Until then, it would be great to have something to identify where a reorg will give me a calculated performance improvement, how much of an improvement, and why. I'd venture to guess the seasoned experts would benefit nicely as well. A big problem? Yes. To fix big problems it is often more productive to break them into smaller, solvable pieces. To me it sounds like a solvable dilemma. Now it just has to be proven...
February 26, 2005 - 3:00 pm UTC
First, I was commenting on the precepts -- they are not the right ones.
Second, we can in fact predict this stuff -- without waiting for it to break! We don't need to wait for a structure to break apart before knowing it will break apart!
All we need know is how the structure is used.
Take the
* Multi-block Index range scans will run far faster when the data blocks are
arranged in index-key order (as evidenced by clustering_factor in dba_indexes).
precept. ABSOLUTELY true (I've written chapters on this in Expert One-on-One Oracle and Effective Oracle by Design). Now, what can we do about it? And better yet, how do you fix it?
a) reorg the structure every week/month over and over and over and over again....
b) recognize the futility in that (stuff works well for a day, but breaks down again), or..... implement the proper structure that will enforce this "locale" of information as data is added. (Actually, I'm going to stop saying "predict" because I keep writing "predicate" -- and because it is not predicting, it is realizing "Houston, we have a problem here and our structure has (past tense) broken down".)
You see, I do not want to have to keep fixing the thing; I want to understand how the data is used and then employ the proper structure. (This is what is missing. This is not about "when to reorg", this is about "stop reorging and start living, man -- we do not need to keep going to this play over and over again".)
Here is a "for example" fix to
* Multi-block Index range scans will run far faster when the data blocks are
arranged in index-key order (as evidenced by clustering_factor in dba_indexes).
because you have to remember -- how many ways can a table be sorted? (another option could be a b*tree or hash cluster as well)
<quote Effective Oracle by Design>
Use IOTs to Colocate Randomly Inserted Data
In addition to being a space-saving device, by obviating the need for some redundant tables, IOTs excel in their ability to physically colocate related information for fast access. If you recall, one of the downsides to a cluster is that you need to have some control over the arrival of the data in order to optimize the physical colocation of the data. IOTs do not suffer from this condition, because they will readjust themselves structurally in order to accommodate the data as it is inserted.
Consider an application that frequently retrieves a list of documents owned by a given user. In the real world, the user would not insert all of the documents he or she will ever own in a single session. This is a dynamic list of unpredictable size that will constantly be changing as the user adds and removes documents in the system. Therefore, in a traditional heap table, the rows that represent this user's documents would be scattered all over the place. Consider what would happen when you run a query such as this:
select * from document_table where username = :bind_variable
Oracle would use an index to read many dozens of blocks from all over the table. If we used an IOT to physically cluster the data together, this would not happen. We can observe this behavior with a simple simulation and AUTOTRACE. For this example, we'll set up a pair of tables: one using an IOT and the other using a heap-based implementation.
ops$tkyte@ORA920> create table iot
2 ( username varchar2(30),
3 document_name varchar2(30),
4 other_data char(100),
5 constraint iot_pk
6 primary key (username,document_name)
7 )
8 organization index
9 /
Table created.
ops$tkyte@ORA920> create table heap
2 ( username varchar2(30),
3 document_name varchar2(30),
4 other_data char(100),
5 constraint heap_pk
6 primary key (username,document_name)
7 )
8 /
Table created.
We use a CHAR(100) just to make the average width of a row in these tables about 130 bytes or so, since a CHAR(100) will always consume 100 characters of storage (it is a fixed-width datatype). So, the only difference between these two tables (besides their names) is the addition of the ORGANIZATION INDEX clause on the second one. That instructs Oracle to store the table data in an index segment instead of a table segment, so that all of the data for that table will be stored in an index structure.
Next, we populate these tables with some sample data. We construct a loop that adds 100 documents for each user in the ALL_USERS table. We do this in a fashion that emulates real life, in that the documents for a given user are added not all at once, but rather over time after many other documents have been added by other users.
ops$tkyte@ORA920> begin
2 for i in 1 .. 100
3 loop
4 for x in ( select username
5 from all_users )
6 loop
7 insert into heap
8 (username,document_name,other_data)
9 values
10 ( x.username, x.username || '_' || i, 'x' );
11
12 insert into iot
13 (username,document_name,other_data)
14 values
15 ( x.username, x.username || '_' || i, 'x' );
16 end loop;
17 end loop;
18 commit;
19 end;
20 /
PL/SQL procedure successfully completed.
And now we are ready for our performance comparison. Here, we will read all of the data from our table, user by user; that is, for USER1, we'll read out all of the rows that correspond to that user, and then for USER2, and so on. Additionally, we'll do this reading in two ways -- one time using BULK COLLECT and the next using single-row fetches -- just to see the difference that array processing can have on performance and scalability, and to show how significantly different this IOT can be from a heap table. Our benchmark routine is as follows:
<b>NOTE: in 10g, plsql silently array fetches 100 rows at a time, so this example will show similar performance for the two. A silent bonus of 10g plsql</b>
ops$tkyte@ORA920> alter session set sql_trace=true;
Session altered.
ops$tkyte@ORA920> declare
2 type array is table of varchar2(100);
3 l_array1 array;
4 l_array2 array;
5 l_array3 array;
6 begin
7 for i in 1 .. 10
8 loop
9 for x in (select username from all_users)
10 loop
11 for y in ( select * from heap single_row
12 where username = x.username )
13 loop
14 null;
15 end loop;
16 for y in ( select * from iot single_row
17 where username = x.username )
18 loop
19 null;
20 end loop;
21 select * bulk collect
22 into l_array1, l_array2, l_array3
23 from heap bulk_collect
24 where username = x.username;
25 select * bulk collect
26 into l_array1, l_array2, l_array3
27 from iot bulk_collect
28 where username = x.username;
29 end loop;
30 end loop;
31 end;
32 /
PL/SQL procedure successfully completed.
The TKPROF report for the single-row fetches shows the following statistics:
select * from heap single_row where username = :b1
call count cpu elapsed disk query current rows
------- ------ ----- ------- ---- ------- ------- -------
Parse 1 0.00 0.00 0 0 0 0
Execute 440 0.05 0.05 0 0 0 0
Fetch 44440 1.50 1.42 0 88886 0 44000
------- ------ ----- ------- ---- ------- ------- -------
total 44881 1.56 1.48 0 88886 0 44000
Rows Row Source Operation
------- ---------------------------------------------------
44000 TABLE ACCESS BY INDEX ROWID HEAP
44000 INDEX RANGE SCAN HEAP_PK (object id 43271)
***********************************************************
select * from iot single_row where username = :b1
call count cpu elapsed disk query current rows
------- ------ ----- ------- ---- ------- ------- -------
Parse 1 0.00 0.00 0 0 0 0
Execute 440 0.07 0.05 0 0 0 0
Fetch 44440 1.11 0.99 0 44987 0 44000
------- ------ ----- ------- ---- ------- ------- -------
total 44881 1.18 1.04 0 44987 0 44000
Rows Row Source Operation
------- ---------------------------------------------------
44000 INDEX RANGE SCAN IOT_PK (object id 43273)
The heap table, with its read-the-index-then-the-table approach, will do at least twice the I/O row by row by row. It must read the index block, and then read the table block. The IOT approach, on the other hand, simply reads the index and is finished. So, this is pretty good. Any day we can cut in half the number of I/Os our system must perform is a good day. Can it be even better? Yes, it can.
In Chapter 2, we used AUTOTRACE to demonstrate the effect that different array sizes (how many rows Oracle fetches in response to each fetch request) could have on an application. We saw that we can significantly reduce the I/O performed by a query if we fetch many rows at a time instead of fetching single rows. Here, we can really show the power of the IOT over the heap table in this regard. Reading further on in the TKPROF report we see this:
SELECT * from heap bulk_collect where username = :b1
call count cpu elapsed disk query current rows
------- ------ ----- ------- ---- ------- ------- --------
Parse 1 0.00 0.00 0 0 0 0
Execute 440 0.06 0.05 0 0 0 0
Fetch 440 0.49 0.48 0 36100 0 44000
------- ------ ----- ------- ---- ------- ------- --------
total 881 0.55 0.54 0 36100 0 44000
Rows Row Source Operation
------- ---------------------------------------------------
44000 TABLE ACCESS BY INDEX ROWID HEAP
44000 INDEX RANGE SCAN HEAP_PK (object id 43271)
***********************************************************
SELECT * from iot bulk_collect where username = :b1
call count cpu elapsed disk query current rows
------- ------ ----- ------- ---- ------- ------- -------
Parse 1 0.00 0.00 0 0 0 0
Execute 440 0.06 0.05 0 0 0 0
Fetch 440 0.24 0.24 0 2110 0 44000
------- ------ ----- ------- ---- ------- ------- -------
total 881 0.31 0.30 0 2110 0 44000
Rows Row Source Operation
------- ---------------------------------------------------
44000 INDEX RANGE SCAN IOT_PK (object id 43273)
The IOT using bulk fetches did less than 6% of the I/O of the query against the heap table using the same bulk collect. When running this test with Runstats instead of TKPROF, I discovered the heap table approach used 600% more latches. These are some serious differences here!
</quote>
So, do you still want to reorg tables? (please anyone -- list out the steps to reorg a table "sorted" in 10gr1 and before and do it online)
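For reference, a sketch of just the offline answer to that challenge (names are placeholders) -- which is exactly why the question is hard:

create table t_sorted
as
select * from t order by sort_col;    -- the "sorted" copy

drop table t;
rename t_sorted to t;

-- ... and now re-create, by hand, every index, grant, constraint, trigger
-- and comment the original table had -- and find an outage window to do it in.
-- Doing the same thing ONLINE is the part with no good answer in 10gR1 and
-- before (an ORDERBY_COLS option for dbms_redefinition only appears later,
-- in 10gR2, if memory serves).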
We can predict, based on your usage pattern of the structure -- before you use it -- what issues will arise. That is where we humans can focus our energies -- because the database is getting smarter at recognizing our mistakes and doing the best it can to help us out.
But remember, the database will NEVER be able to turn:
for x in ( select * from t1 )
loop
insert into t2 values ( x.a, x.b, x.c, ... );
end loop;
into
insert into t2 (a,b,c,...) select a,b,c,... from t1;
That is -- the database will never be able to fix a bad implementation, it can only try to do it as fast as it can.
<quote>
I have a source code control system that chains rows in one of the
tables. After complaints from the development staff and looking into the
problem I did reorg to fix the chaining. A 20 second process went to 2 or 3
seconds. The reorg did exactly what I predicted. Now, it may happen again in
the future, but . . . can it be predicted ahead of time. Or, even provide me
with a warning?
</quote>
can you predict that? Absolutely. Do you insert rows with lots of missing (null) or short fields? Do you update them over time? Did you leave the default 10% free?
You will migrate the rows.
Then, do you frequently access this object a lot via plans similar to this:
table access by index rowid
   index range or unique scan
If so, the migration of rows should be anticipated (note: not a full scan -- the number of table accesses by index rowid is what will be the issue here; that was another point the precept was wrong on) -- to the point that you can either
a) upon table creation say "look, the row starts life small but ends life at about 1,000 bytes. I have an 8k blocksize, so I want at most 7 or 8 rows per block. PCTFREE should be really high to accommodate this (much more than the default 10%)", or
b) figure out that "7 or 8" and use the MINIMIZE RECORDS_PER_BLOCK trick mentioned above -- see the sketch below.
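A sketch of option (b) (the table name is invented; the seed rows exist only to establish the cap):

create table docs ( id number, payload varchar2(1000) );

-- seed exactly 8 rows; with the table otherwise empty, they land in one block
insert into docs
select rownum, 'seed'
  from all_objects
 where rownum <= 8;
commit;

-- record "8 rows" as the most any block may ever hold from now on
alter table docs minimize records_per_block;

delete from docs;    -- the seed rows have done their job
commit;

From here on, no block will hold more than 8 rows, leaving each short row room to grow toward 1,000 bytes without migrating.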
I would not want to win the contest based on the above 4 precepts, no, still not. I'd rather expend energy helping you figure out how never to get into the situations where a reorganization would be called for (the software is helping you there already over time).
So you see, I find the above 4 precepts rather dangerous -- too narrow and not really accurate.
It is true that index fast full scans will run faster against 500 blocks versus 1,000. But -- where was the discussion of what will NOT run faster? For how long does this fix apply? How long does it run faster? This precept needs much fleshing out.
Multi-block index range scans -- I would still like to see the list of things to do to "fix that"; it is not a reorganization in the current sense of the word.
Large-table full-table scans will run faster after un-chaining and un-migrating? They will not.
Table updates being affected by "unbalanced freelists" performance wise?
I only hope you don't start with these 4 "laws"
But if you do gain an understanding of the structures available to you -- beyond the HEAP table and the B*TREE index -- you'll find many ways to avoid many of these issues, and to recognize when something might be unavoidable. (Then again, 10gR1 will catch many things for you -- like the "low density table" -- and offer to shrink it, the table. That shrink was in fact one of my favorite top-10 things about 10g. If you cannot use partitioning to solve your purge problem, at least we do have that -- and the database actually tells us about it.)
My entire initial point was "I have problems with these precepts", and I still do.
fascinating but "a tad" difficult problem
Alberto Dell'Era, February 26, 2005 - 2:55 pm UTC
Well, a mathematical model of Oracle already exists -- it's the one used by the CBO. So in theory, one could think of feeding it the (calculated) statistics "after the reorg" and be able to predict the new behaviour, then compare with the current behaviour.
Can you come up with a better model than the CBO's? Good luck -- Oracle has invested (and is investing) millions of dollars in it and, most importantly, has perfected it with actual experience from real cases; still, it has some limitations. A hard task indeed.
Now, assuming that the CBO model were exact (and accessible) -- how are you going to calculate the new statistics? You need another mathematical model, a structural one, which, fed with the "rows in the table", will come up with the new structure of the table (and/or indexes) on which to calculate the statistics. You need very intimate knowledge of the data structures, which are very complex (especially the indexes)... so good luck here too.
And since it's no great deal to be able to predict the behaviour 1 picosecond after the reorg, only to have completely different behaviour after 1 day, you will also need to model the modifications to the table (how the apps are going to update/insert/etc. the rows, which rows, and so on) -- the statistical load -- to build a predictor. Good luck there too: how many insert/update/delete patterns are there? Each of them will likely need its own statistical model...
So you know, being a mathematician myself, I've been fascinated with this problem in the past, and concluded that it would take quite a bit of time.
I will check the winner of the contest in my next life ;)
David Aldridge, February 26, 2005 - 7:30 pm UTC
This is a paraphrase of some thoughts I expressed on HJR's forum here ... </code>
http://www.dizwell.com/BB/viewtopic.php?p=133#133 <code>
Supposing, as part of your hiring process for an application developer, you required applicants to answer the following question in (pen and paper) written form: "Explain the complexity of designing a program for the optimization of indexing and table structuring, based on inputs of query and query-performance history: 4 hours".
Now there would be a test that I'd love to see. IMHO it would be a great way of testing both the depth of a candidate's knowledge of Oracle and their ability to clearly plan and communicate their thoughts in a structured document.
In an interview I might currently ask a question along the lines of "please describe the origin of an index's clustering factor, and the way that it is used by the cost-based optimizer", but by asking a big question like the one in my four-hour-essay example you can cover a lot of ground. Well, it's just a pipe dream I guess, but by turning the assumption in Don's question around I think you have a great basis for some stimulating discussion. It's stimulating me, at any rate. :)
Wendy, February 27, 2005 - 7:53 am UTC
I was shocked to discover what a serial offender this guy is and how much he writes. Just perform a search on Burleson here on Ask Tom and every hit refers to similar mistakes. A google search reveals even more examples going back years.
Is there anyone else who writes so much stuff on things they obviously know so little about? It's remarkable how he continually publicly embarrasses himself by confirming his ignorance to everyone with what he writes. A few days ago we had no idea he was ignorant on how freelists work, now we all know.
And when people dare point out his mistakes, he appears to just complain and run off. It's bizarre.
Tom, you have acted professionally and you've insulted no one. This Burleson has again written BS, been put to task on it, complains without even attempting to explain how freelists he claims to see unbalanced all the time actually get unbalanced or affect performance and runs off again.
Now, that's what I call unprofessional and insulting. Has anyone considered that Burleson might be working for a competitor and deliberately writing trash? No one can consistently be this naive, can they?
My thoughts on this thread...
Robert Freeman, February 27, 2005 - 11:09 am UTC
I've been following this thread with some interest. It seems to me that sometimes, reality can get lost in the luster of experiment and, shall we say, a little spirited debate.
To me, there are two very different issues at heart here.
I think Tom is right on (and I think Don would agree) that it all begins with the design. Design the database right, and hopefully we will find that we don't need to deal with these issues. Where I think we have missed the boat in this discussion is that, often, the DBA is the last to get included in the loop. Also, even when the DBA is included at some point, the mix of experience, understanding, and other factors can result in a less than optimal solution. We can (and should) talk all day about the right way to do things, but we must also accept the reality that in many cases the optimal solution was not taken.
I also agree that reorganizing for the sake of reorganizing is a dangerous thing to be doing. I've been at more than one client that had scheduled table reorganizations executing almost weekly, and that gives me the willies. Index reorganizations are a bit less of a problem, but they do consume resources.
I've seen index reorganizations that have improved performance to incredible degrees for a long period of time. Was the application design optimal? Nope. Was the database design optimal? Certainly not. Did the client have the million-plus bucks to go back and redesign everything? No way.
In my own work I have found that companies don't often think to include experienced DBAs from the beginning, at design time. Even when I am included from the ground up, I'm subject to business rules, the whims of management, and their desire to produce an application that is everything to everyone (you have not lived until you have been asked to provide all possible permutations of point-to-point prices for over 5,000 locations, at a moment's notice, in an application -- on V7.3!).
The poor performance of some Oracle Apps modules for YEARS provides some proof that even the experts don't get it right the first time, or the second, or the third... That brings up the issue of third-party vendors who design these things: how can I be responsible for those designs, and how can I fix them and still be supported?
More often than not, experienced DBAs come in to a client at the point of disaster, of failure, and need tools that help them determine what is causing the failure. We come on-site and are presented with this load of work. How do you decide what takes priority? Lots of times I've been called in to implement a model that's already been designed, and there isn't the budget or time available to redesign it. I've been called in when the model didn't work, and they are on the clock and need a fix NOW. This is the real world of most Oracle consultants, I think.
That being said, this is where the idea of a predictive modeling approach comes in. When consultants are called in, companies are often under the gun; they need solutions, even short-term solutions, to get them out of trouble. Given success with short-term solutions, we can then suggest that we might be better off finding long-term solutions in a redesign of the database or the application.
Predictive modeling tools then provide one method to take an already bad situation and hopefully make it better. If written correctly, such a model helps in the decision process and removes some of the burden from the DBA, because it can predict what's going to go wrong and WHEN it's likely to go wrong. As a result, I can then prioritize my work. A predictive model is also a bang-up method of proving to the customer that they do, indeed, have a HUGE problem with the design. It's evidence rather than conjecture. It's proof rather than guessing. It's instrumentation, and isn't that what science is all about?
Predictive modeling is used in many sciences, from archaeology to medicine. I'm not sure what the resistance to such a thing would be in the DBA realm. Is it hard? Sure. But who said anything worthwhile was easy? Don't dismiss an idea just because it's hard, for crying out loud.
Again, I agree with the premise that things should be done right from the beginning. I agree that, in most cases, one reorg is all you should ever need for a table, and that it's a darned dangerous proposition. I agree that you can normally build/rebuild a database design so that it will be better. What I disagree with is the assertion that this is a possible/practical solution in the majority of cases. Sometimes you have to do what you have to do.
Respect to all.
February 27, 2005 - 11:29 am UTC
Robert
Care to comment on the technical validity of the PRECEPTS as put forth?
forget about predictive modeling and the rest for a moment, for that is not what this thread is about (I've already said "I utterly believe in it"; I'm all for it...)
I was, have been, and will continue to say "these precepts are not a starting point, they are 1/2 truths and inaccurate"
Period.
That is what this entire thread is about -- the very precepts upon which this "challenge" is based are not valid.
And no one seems to be disputing that except one, and they do not seem to be addressing any of the technical issues.
that is the bottom line of this thread. The very introduction to it starts with:
So, it would appear we should take the "precepts" as law. But I have a problem
with them. In order...
So, would anyone care to address these precepts that form the basis of this challenge.
This is my concern. People read it and say "hey, these are the law, this guy with 20 years DBA experience said so after all, it must be true"
My issue is not with predictive modeling, reactive modeling, reporting, analyzing data, trending information, whatever.
My issue is with these false "precepts". Without valid precepts (assumptions, governing rules, whatever) any model built will be a model of fantasy and not reality.
So, would anyone care to address these "precepts", they only advance and create yet a new generation of myths.....
Myth: rebuild a table when it has (truly) chained rows...
Myth: large table full table scans with migrated rows need to be rebuilt
Myth: freelists get unbalanced
Myth: rebuilding the mythological unbalanced freelist will speed (sic) updates
Myth: dictionary managed tablespaces seem to have something to do with freelist
management
I'd still like to know how it was measured that freelists were unbalanced -- so I too can "see it all of the time".
I'd love to see it and rebuild them and measure the speedup.
I won't even touch the rules of thumb proposed on the "polite" thread that is now opened for all to read:
<quote>
As for rules of thumb, how about these?
How about these ROT?
- Using optimizer_mode=choose with 10% of your tables analyzed is bad, very bad
- Always have an index on at least one column in your SQL WHERE clause with a large table
- The optimizer_index_cost_adj should be lowered to between 20-35 for OLTP systems (I got this ROT from a Tom Kyte presentation)
- Setting pctused to re-link a block with less room than the avg_row_len is wrong, always. There would be no room for the freed block to accept a new row.
</quote>
for I disagree with all of them as ROT (yeah, even the ROT from the Tom Kyte presentation since I do not recall ever once saying "ANYTHING is universally true". In fact, I am on record many times over and over saying quite the opposite. I can envision back in the 8i days before systems statistics and such saying "something to consider, but make sure you understand how it works")
I'd prefer people understand what they are doing and why -- otherwise, please just accept the defaults -- they are the safest ROT. Or let the self managing components "self manage" them.
<quote Effective Oracle by Design>
The effect of adjusting these two parameters is that they have a profound and immediate impact on the CBO. They radically change the costing assigned to various steps. This, in turn, dramatically affects the plans generated. Therefore, you want to test thoroughly the effects of these parameters on your test system first! I've seen systems go from nonfunctional to blazingly fast simply by adjusting these two knobs. Out of all of the Oracle initialization parameters, these two are most likely to be defaulted
</quote>
interesting thread indeed!
Menon, February 27, 2005 - 11:51 am UTC
Hi Tom
This thread has become fairly interesting indeed.
Even if Don's precepts are wrong (as you have shown),
Robert's essential argument seems to be valid, isn't it?
Given that most systems are more often than not designed incorrectly, there is indeed value in having a model that nudges a DBA in the right direction -- even if it is right only, say, 80% of the time. Given that
a. there is value in such a model, and
b. the above precepts are not the right ones,
I wonder if there is a right set of assumptions (not precepts :)) that could replace the ones forwarded by Don? Getting things right in the first place is the best thing for sure, but more often than not in the real world there is value in a combination of a short-term and a long-term solution (for example, setting cursor_sharing=force as a short-term solution and fixing bind variable usage in the application itself as a long-term solution in the scenario where bind variables are not being used -- as has been advocated by you many times, I believe).
PS: OK -- I guess I should apologize to Don about the dress code comment (funny though it was, it was not quite relevant) -- though being called a "minion" indirectly was not too polite either :) Having said that, I must agree with Bill above: when I need to find misinformation I go to the web sites of self-proclaimed experts, and when I need the real stuff I come to asktom.
February 27, 2005 - 12:21 pm UTC
I've said "I utterly believe in the ability to predict what will happen"
That is not even a question.
Two of my favorite new 10g features -- enhanced online redefinition, online segment shrink. That's in writing.
However, the issue here is the "precepts" and that seems to be getting lost in the discussion.
I believe the software is getting the ability to detect what might benefit from a re-org. We humans are the only ones that stand a chance to change the future however.
but again, my stated issue here is with these "precepts" -- a term I felt obliged to define so one can see the gravity of what it implies. True, I have not offered up my own set of precepts -- but I didn't see any reason not to question incorrect ones.
Sometimes it is a matter of scale
Mike Ault, February 27, 2005 - 8:35 pm UTC
Tom,
I find the concept of optimizing indexes for inserts, updates and deletes a bit daft. In my experience, the index is used to speed queries. If the index is causing issues with bulk insert, update or delete operations, it is usually best to drop it and rebuild it after the bulk operations complete. If we are talking about single insert, update or delete operations, I am afraid I would rather make the index more efficient for data retrieval.
Also, from examining many clients, they all seem to use the default array size. We usually tell them it would be best to increase this, but as things stand I would say a majority of Oracle users leave it at the default, so it would usually be a fall-through in any predictive model of this type.
Usually it requires several weeks or longer for an index to get to the state where it may benefit from a rebuild. After a rebuild it may take longer to reach a state of entropy where it again requires reorganization, since the remaining entries are probably older, more stable entries. So you are saying we should trade away weeks of better performance just because eventually we will have to rebuild the index again? I think I used that argument on my mother about cleaning my room when I was a teenager.
As to organizing the table by the order of the index, this would only be used for very specific tables after a careful analysis of the costs and benefits. Usually the problem is that there is a concatenated index on the table that has been built with the wrong column order, a simple rebuild of the index in proper column order can greatly improve the efficiency of the index, and yes, you have to make sure the SQL that uses the index is also properly adjusted.
As to the migrated rows issues, assuming that the bulk of updates on existing rows are complete, a rebuild to pack data into blocks can be a real performance boost. You then adjust the block parameters to optimize the blocks for subsequent insert, update operations. Of course with automated segment space management this may all be moot.
Just a few thoughts and observations. Remember, what seems clear when run on a small, single-user laptop database may get very muddy in a large, multi-user, complex environment.
As to arguing about precepts: I support what has been said -- if you can provide better ones, do so; we would all love to see them. However, just criticizing others is not really helpful; it subtracts from anything positive that might be gained from the discussion -- muddies the water, so to say.
Mike Ault
February 27, 2005 - 9:16 pm UTC
<quote>
I find the concept of optimizing indexes for inserts, updates and deletes a bit
daft.
</quote>
Really? Huh. Ok. Fine. I'll let that go by (it is true that indexes are there for retrieval; HOWEVER, the case that keeps getting thrown out time and time again is the fast full index scan, which in an OLTP system would be used how many times? Oh yeah, not that many, but we wouldn't want to keep that in mind -- so, ok, whatever). I was talking about a transactional system -- single-row modifications, but whatever. This never actually happens:
<quote>
Once a month the indexes are rebuilt (alter index rebuild).
On the day following the index rebuild, 50 logs are created.
On the following days 45... 40... 35... 30... 25... down to 10. At 10 logs this remains constant.
Mining the logs, we see that we have increased INTERNAL INDEX UPDATES.
</quote>
If the retrieval is the most important thing -- yes, I would concur. I just like to hear the other side of the coin from time to time. Rebuilding everything in sight is not a good idea.
<quote>
they all seem to use the default array size.
</quote>
and what is that (a default arraysize)? There is no such thing unless you choose a language, a version of that language, and a version of the database -- given that we don't really program in SQL*Plus, that is (which has a setting called arraysize that defaults to 15). Pro*C has a default prefetch of 1 before 8i and of 2 in 8i and above -- but Pro*C programmers do use host arrays frequently (in my experience). Bulk collect in PL/SQL is fairly popular as well, and JDBC -- well, that defaults to 10, but it changes lots once the programmers find it. So, I don't know what a "default array size" is, as it varies by language and environment.
So what is the number? 10? 2? 15? 100? (10g might make an argument for 100, but only in PL/SQL -- and then again, there are lots of bulk collects happening at various sizes, but anyway.)
You say
<quote>
Usually the problem is that there is a concatenated index on the table that has
been built with the wrong column order, a simple rebuild of the index in proper
column order can greatly improve the efficiency of the index
</quote>
I would really like to hear more about this, as the clustering factor could definitely change -- but the query would have to CHANGE the question it is asking as well.
I think (correct me if wrong) your point is:
if the data on disk is sorted in the table by A,B and you create the index on B,A -- the clustering factor will be "bad" for that index. If you create the index on A,B instead -- it will be good. Is that a correct interpretation?
Because if so, something is really amiss. The order of columns in the index is not at all driven by the order of rows on disk in the table, but rather by how the queries access the data (which columns you use "=" on and which you use ">" or some other range operator on). If my queries used "where b = :b and a > :a", putting the index in order of (a,b) would improve the clustering factor but could be simply disastrous for retrieval -- rather than "find b, then range scan the a's > :a within it", we would have to search every a > :a for b = :b. Moving the order of the columns in the index might affect the clustering factor, but it would definitely play havoc with the queries that used them -- unless all of the operators on this concatenated index were "=", but then the clustering factor for a bunch of "="s won't really affect the plans much? I'm not sure about this concept.
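To make the "good for one, bad for the other" point concrete, a small sketch (all names invented; the table is built physically sorted by (a,b)):

create table t
as
select a, b, rpad( 'x', 80 ) pad
  from ( select mod(rownum,100) a, rownum b
           from all_objects
          where rownum <= 10000 )
 order by a, b;

create index t_ab on t(a,b);
create index t_ba on t(b,a);

exec dbms_stats.gather_table_stats( user, 'T', cascade => true );

select index_name, clustering_factor
  from user_indexes
 where table_name = 'T';

Expect T_AB to show a clustering factor near the number of table blocks and T_BA one near the number of rows -- same table, same columns; the only thing that differs is the question each index is good at answering.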
you say "i support what has been said"
on which side precisely?
do you support that freelists get "unbalanced", that "this is seen all of the time", and that "we see update performance increases due to this"? Or the contrary? (It is not clear.)
do you support that this unbalancing act happens only in dictionary managed tablespaces for some reason? or the contrary?
do you support that chained rows (truly chained rows, too big to fit on a block) benefit from a reorg? or the contrary?
do you support that migrated rows in a large table full table scan need to be reorged? or that the contrary is true?
do you see tons of index fast full scans in OLTP systems? (I mean, those are the ones that benefit most from the proposed rebuild and were explicitly called out.)
You see, yes, I keep coming back to these precepts -- which are not at the level I would start building a model from (and the idea that the precepts used for a model need not be accurate is, well, I don't know what that idea is)....
I'll stand by the "index fast full scans will run faster" -- that is true (no kidding). However, in an OLTP system -- if you are doing tons of those, well, not sure what to say there -- but perhaps in an OLTP system you do want to optimize for modifications (is that not why we have reporting schemas?). So, I'll still say "half the story" -- but a true half, definitely.
For the thing about "arranged in index key order" -- totally agree; however, please describe how one reorgs to do that? Or might you need to reimplement the physical schema and use structures to accomplish that long term? (That is, it is not a reorg here, but rather a look at the schema that needs to take place -- because it is really hard to get that table "in order".)
The large-table full-scan stuff with regard to migrated/chained rows -- I'm totally at a loss as to what to say (I agree migrated rows can be an issue -- I have written about that topic. They totally can be a problem, just not in a large-table full scan -- rather, when using an INDEX).
Getting information right, statement-wise, is somewhat relevant, no? The assumption that the bulk of updates on existing rows are complete is not really real-world, is it? In the event it was, sure, a one-time rebuild would be easy -- but it never is. How does ASSM fix the pctfree problem, by the way? That is the one parameter that ASSM does use (it is freelists, freelist groups and pctused that are not needed with ASSM; pctfree is still very relevant).
Table updates with the "unbalanced freelists"? I just don't know what to say.
At the end of the day -- I had issues with the precepts, and that is what I wrote about.
If pointing out that something is wrong is not helpful... Well, I really don't know what to say to that either? We should just let it stay out there floating about?
Tell me, when someone comes to you with blatantly wrong stuff, do you:
a) say -- oh yes, very good idea (because we cannot openly say something is wrong)
b) say -- oh no, that is not right and try to explain why?
Things that are just plain wrong -- it does not matter if we have a small, single-user laptop database -- or a 4-CPU PowerEdge with 500 users. They are still wrong, no? Does the number of users change the freelists precept?
This does not muddy the water -- I feel it helps clear it up (opinion). Freelists do not get unbalanced -- any more than indexes do. Chained rows do not get fixed by a reorg. Migrated rows do not kill the performance of large-table full table scans. Dictionary-managed tablespaces have nothing to do with freelist management. ASSM does not use freelists.
The precepts muddied my water.
I personally love it when someone points out a flaw in my posting. I leave the flaw and the followup and clarify. But anyway.
This has been a very instructive thread for me as well. I've learned that pointing out incorrect information is "not helpful". That the myths flying around should just be left to themselves. Or not. I'm sorry if defining a term "precepts" is offensive and if looking at the precepts and asking "do they need to be expanded considerably or just thrown out because they are based on 'not the way it works'" is wrong.
If the goal was to "create a reliable predictive model...", the precepts as supplied would send you in the wrong direction.
Some freelist thoughts
Mike Ault, February 27, 2005 - 10:03 pm UTC
Quoting from: "FREELIST MANAGEMENT WITH ORACLE 8I" by Stephan Haisley, Center Of Expertise, Oracle Corporation
"When using multiple process freelists, the amount of unused space within datablocks can increase. The reason for this
is a user process maps onto a particular process freelist using an algorithm (described below), and will not search other
process freelists for space if none is found within its own. If a large number of blocks are linked in a particular process
freelist, and another user process has no free blocks on its process freelist or no free blocks exist on the master freelist,
the process may request movement of the HWM or creation of a new extent. This could leave free blocks on the other
process freelist unused. Having said this, Oracle does try to evenly spread out the use of the process freelists to
eliminate this problem."
This would seem to indicate that a single, non-parallel DML statement, which will utilize a single process freelist, will drive that single process freelist into imbalance and cause issues whereby the table extends even though it has more than adequate free space not to extend. If the other process freelists (as described by the FREELISTS parameter) have many blocks allocated to them, these blocks will not be unlinked by the other freelists; instead new blocks will be requested. This is what Don is referring to by unbalanced freelists and the resulting issues, including poorer performance caused by excessive process freelist lengths. This is also aggravated in RAC when multiple freelist groups are used.
While this is from 8i, the behavior is consistent as long as ASSM is not in use, from what I can find out, up through release 10g.
Mike
February 28, 2005 - 7:03 am UTC
Mike -- please read above (we've digested this note already)
And -- please -- I've been to this point before -- tell me how "unbalanced" freelists and the rebalancing thereof would lead to:
* Table updates will run faster after reorganizations when the table has
unbalanced freelists (with dictionary-managed tablespaces only). It is unclear
if bitmap freelists (automatic segment space management) will benefit.
being true -- I've already said "ok, let's say it were true: A) how did you measure it? B) what effect does it have on performance -- it is a space management issue. C) ASSM does not use freelists, and D) why the confusing reference to a dictionary-managed tablespace?"
But the point is that the number of blocks on the other process freelists in a single freelist group is about "5".
And if you read that article, the section on transaction freelists, it describes what will happen with this parallelized delete:
<quote>
TRANSACTION FREELISTS
Transaction freelists are implicitly created by Oracle when a transaction generates sufficient space in the datablock to reduce the amount of used space to a value lower than PCTUSED and return it to a freelist (through updates or
deletes). If a transaction does not free up enough space in any datablocks it is modifying, then it will not be assigned a transaction freelist. Until the transaction commits, the free space made available in the datablock will only be available for this transaction. A transaction freelist is accessed exclusively by the transaction it has been assigned to. Once the transaction commits, the transaction freelist is made available for other user processes to search for free blocks. These blocks can then be moved to the master freelist. The algorithm Oracle uses to search the freelists for a free block is
described in the next section.
</quote>
When a transaction frees space, that transaction upon commit puts the freed space that remains (it may have reused it itself) onto the master freelist again.
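For what it is worth, about the only supported way to even count freelist blocks is per freelist GROUP, via dbms_space -- there is no per-process-freelist view, which is part of the measurement problem. A sketch (the segment name is a placeholder; set serveroutput on first):

declare
    l_free_blks number;
begin
    dbms_space.free_blocks
    ( segment_owner     => user,
      segment_name      => 'T',
      segment_type      => 'TABLE',
      freelist_group_id => 0,
      free_blks         => l_free_blks );
    dbms_output.put_line( 'free blocks on freelist group 0 = ' || l_free_blks );
end;
/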
My Precepts are perfectly valid
Donald K. Burleson, February 27, 2005 - 11:06 pm UTC
Hi Tom,
OK, apology accepted.
- First, I don't follow your "I'll just have to do it over-and-over" argument. I liked Mike Ault's analogy about cleaning your room:
</code>
http://www.dba-oracle.com/oracle_cartoons.htm
******************************************************
I don't buy the opinions about the precepts being invalid. In fact, they are all accurate observations of real-world Oracle systems. If I look closely at your proofs, you validated 3 out of 4, and discounted the benefits of two of them because the benefit might be too small. Just to keep you out of the dictionary, I removed the word "precept", so we can move on and, hopefully, discuss the issue:
http://www.dba-oracle.com/oracle_tips_proof_table_index_rebuilding.htm
Did I mention that my revised challenge has no precepts anymore?
******************************************************
- Do you really expect people to completely disregard a rule-of-thumb because it does not work in a small percent of the cases? That doesn't sound scientific to me. I'll take a tool with 95% validity any day. Here is how Oracle designs their predictive models in ODM:
http://www.dba-oracle.com/t_software_regression_modeling.htm
******************************************************
- You wrote: "This stuff is actually very easy to predict. If you know the structures and how they work"
Great! Then, since you know the structures and how they work, let's make this thread have some value and tell us how you would do it. Oh, and please don't retort with "I'd rather solve it before it breaks". We heard you the first time, and we agree!
Knowing that Redwood is building an index rebuilding prediction tool (AMT) as-we-speak, tell us about how it should predict the benefits of an index rebuild:
http://www.dba-oracle.com/oracle_tips_predictive_modeling_10g.htm
******************************************************
Last, what's this stuff about being a self-proclaimed scientist? I thought you have academic credentials in math, not science. Last I heard, database management has never achieved science status, despite the great efforts of Codd and Date:
http://www.dba-oracle.com/oracle_tips_credence_qualifications.htm
What's this Oak Table club all about? Are they really scientists? Did you read the exit manifesto of one of your former Oracle scientist members? The ultimate irony here is that these are the EXACT SAME issues you have with me!
"The scientific minutiae, however "true" they might be, would merely confuse; the bold, sweeping statement, however simplistic, will nevertheless explain, despite the bold, sweeping statement not being technically accurate. I simplify like that every time I'm in the training room, and most times I post to c.d.o.s: but it is clearly incompatible with the pure science of Oracle espoused by the Oak Table.
There is also a danger that one gets so addicted to that sort of science that one forgets that it is, of itself, of practical relevance to a miniscule number of people"
******************************************************
Have you seen Dr. Kumar's book "Easy Oracle Automation"? It's deliberately over-simplified, over-generalized and easy-to-understand. Yeah, it leaves out minutiae and exceptions too! Written by the IEEE Engineer of the Year, it's simple, clear, and almost always correct:
http://www.rampant-books.com/book_2004_2_10g_auto.htm
February 28, 2005 - 7:14 am UTC
apology? for what and where?
and who is being insulting? give me a break. Funny cartoons, great technical edge you have there. When you give up, you give up don't you. The clean room one is especially funny, I've been saying (to use analogies if you must) -- keep your room clean. Not "don't clean the room" -- but hey, whatever.
And you know what, if 2 hours of thought and fixing the problem forever (maybe you get $200k for that or are you saying that to figure it out and implement the change you take 6 months -- not clear but then again it is just a cartoon in jest right) can put off $50k of hardware -- you know what, I'm proud of that. Yes, I'll keep coming back to that.
I'm hardly anonymous Don, I don't think anyone other than you wonders who that anonymous guy behind the forum is. Heck, you and I even had dinner together once!
http://www.oracledba.co.uk/sanfran.htm
My bio is in the magazine column every issue that goes with the forum; you seem to have been able to discover I was a math major in college, and anyone could see I too have written a book or two (and I am fairly glad to say a book or two, as writing does take time to get it accurate -- and it is quality, not quantity, that counts).
In fact, if you look at your "credentials" page, I've got the credentials of, say, a Cary or a Gaja (you missed Gaja's books -- he's a published author as well; a google or two should clear that straight up).
<quote>
I thought you have academic credentials in math, not science
</quote>
Last I looked mathematics and my approach here have lots in common. But anyway -- so I have a math degree? Does that mean something?
And where do I make the "self proclaimed scientist" claim? I am an Oracle database technologist -- true. Do I believe you can apply the scientific method here (and in life in general)? Absolutely.
I'll let Howard comment on your out of context quote of his.
I am not saying "disregard rules"; I'm saying being a tad more comprehensive and accurate would make for a better model.
No, your precepts were for the most part provably not valid.
Ault comments on unbalanced freelists
A reader, February 27, 2005 - 11:23 pm UTC
Hi Mike,
"If a large number of blocks are linked in a particular process freelist, and another user process has no free blocks on its process freelist or no free blocks exist on the master freelist, the process may request movement of the HWM or creation of a new extent."
Yes, that is exactly what I described: when dba_free_space shows free blocks, yet the table extends wildly.
When you issue the "alter table xxx storage (freelists x);" command, Oracle adds an additional segment header somewhere in the table. Like I said in my original PRECEPT, after a massive delete, all the blocks on the freelists get seriously unbalanced.
A table that extends while it has thousands of free blocks has a freelist imbalance, and a reorg will coalesce the freelists.
Again, my original precept is correct.
Here is the script I use to detect freelist imbalances:
set serveroutput on;
DECLARE
x number;
cursor c1 is
select
substr(table_name,1,13) table_name,
substr(a.owner,1,13) owner
from
dba_tables a, dba_segments b
where
a.table_name = b.segment_name
and
b.extents > 1
and
a.freelist_groups > 1
and
b.extents > a.min_extents
and
empty_blocks > 3*(a.next_extent/4096)
and
a.owner not in ('SYS','SYSTEM');
BEGIN
FOR c1rec in c1
LOOP
dbms_space.free_blocks(c1rec.owner, c1rec.table_name, 'TABLE', 01, x );
dbms_output.put('Number of free list group 1 entries for table ');
dbms_output.put(c1rec.owner);
dbms_output.put('.');
dbms_output.put(c1rec.table_name);
dbms_output.put(' is ');
dbms_output.put_line(x);
dbms_space.free_blocks(c1rec.owner, c1rec.table_name, 'TABLE', 02, x );
dbms_output.put('Number of free list group 2 entries for table ');
dbms_output.put(c1rec.owner);
dbms_output.put('.');
dbms_output.put(c1rec.table_name);
dbms_output.put(' is ');
dbms_output.put_line(x);
dbms_output.put_line('.');
END LOOP;
END;
/
February 28, 2005 - 7:26 am UTC
<b>(do you really use this? I mean, you join dba_tables to dba_segments by table_name to segment_name?
So you join MY index to YOUR table
and your table to your table
and your table to their rollback segment
You seem to be missing quite a few predicates.
or did you just whip this freelist GROUP thing up? (and why just 2 freelist groups -- what if it was the 3rd one? or the 4th, or the 5th? no loop over the number of freelist groups?)
and with a locally managed tablespace, well, this predicate:
empty_blocks > 3*(a.next_extent/4096)
returns "I don't know" because a.next_extent is null in DBA_TABLES -- but given that freelist management differents not a whit between locally and dictionary managed tablespaces (I keep saying that, but you don't seem to hear it -- there is a difference between ASSM and Manual segment space managment -- but as far as locally/dictionary managed tablespaces - nope)
I don't see how this would frequently "see" anything in many/most 9i databases, and even in 8i, since it would not actually see most tables! (and given the missing predicates, the comparisons with b.anything in the predicate are somewhat "random" -- you might be looking at b.extents for an index named 'FOO' and comparing that to a table named 'FOO')
</b>
that could show unbalanced FREELIST GROUPS, about which there is no dispute whatever -- that has never been in question. The question was about process freelists.
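(For what it's worth, a minimal sketch of what a sounder version of that check might look like -- joining on owner and segment type as well, looping over however many freelist groups the table actually has, and dropping the next_extent predicate that goes null in locally managed tablespaces. The names are generic; it reports free-block counts per freelist group and says nothing at all about performance:)
set serveroutput on
declare
   l_free number;
begin
   for r in ( select a.owner, a.table_name, a.freelist_groups
                from dba_tables a, dba_segments b
               where a.owner        = b.owner          -- join on owner as well
                 and a.table_name   = b.segment_name
                 and b.segment_type = 'TABLE'          -- don't match indexes or rollback segments
                 and a.freelist_groups > 1
                 and a.owner not in ('SYS','SYSTEM') )
   loop
      for g in 1 .. r.freelist_groups   -- every group, not just groups 1 and 2
      loop
         dbms_space.free_blocks( r.owner, r.table_name, 'TABLE', g, l_free );
         dbms_output.put_line( 'freelist group '||g||' free blocks for '||
                               r.owner||'.'||r.table_name||' = '||l_free );
      end loop;
   end loop;
end;
/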
So Don (same IP address within minutes of prior posting, have to assume it was you -- feel free to clarify otherwise), you've just proven that <b>freelist groups</b> may become unbalanced -- but again, we ask "in the context of your statement regarding performance -- what does this show? a performance issue OR a space management issue?"
And "alter table xxx storege (freelists x);" does not result in "Oracle adds
an addional segment header somewhere in the table."
It simply adds an initially empty freelist for the segment, which will pull 5 blocks from the master freelist when it needs to.
Since a "segment header" would be a block -- one would expect that adding a freelist would change the blocks used by a segment if this were true (sorry, this is a single user test, but it might still be valid):
ops$tkyte@ORA9IR2> create table t as select * from all_objects;
Table created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec show_space( 'T' );
Free Blocks............................. 0
Total Blocks............................ 384
Total Bytes............................. 3,145,728
Total MBytes............................ 3
Unused Blocks........................... 3
Unused Bytes............................ 24,576
Last Used Ext FileId.................... 8
Last Used Ext BlockId................... 265
Last Used Block......................... 125
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2> alter table t storage (freelists 2 );
Table altered.
ops$tkyte@ORA9IR2> exec show_space( 'T' );
Free Blocks............................. 0
Total Blocks............................ 384
Total Bytes............................. 3,145,728
Total MBytes............................ 3
Unused Blocks........................... 3
Unused Bytes............................ 24,576
Last Used Ext FileId.................... 8
Last Used Ext BlockId................... 265
Last Used Block......................... 125
PL/SQL procedure successfully completed.
Now, adding a freelist group -- that would do that (add another separate data structure), but that is not something you can "alter in", that requires a rebuild of the segment.
So, again, we are back to "where or how did you see freelists becoming unbalanced? and more importantly with regards to this discussion, what impact on *performance of modifications* did this have?"
Just one comment
Menon, February 27, 2005 - 11:24 pm UTC
This is from Menon (not Tom!)
"OK, apology accepted."
It was I who posted that "dress code" link - not Tom!
You almost make it sound that it was Tom. Please don't
do that.
Thanx.
Rebuttal to Burleson
Bob B, February 28, 2005 - 12:56 am UTC
As a consultant whose time is precious, being able to make "good" decisions in a timely manner is cost-effective for both you and your consultees. That does not, however, make it the correct way from the consultee's perspective.
http://www.dba-oracle.com/oracle_tips_proof_table_index_rebuilding.htm
<QUOTE>
... the subject's preference to take showers vs. baths is an extremely reliable measure of self-esteem. Do we know why? No. Do we care? Not really ...
</QUOTE>
While a battery of tests provides a quick way of categorizing people's personalities, it does not give us understanding of what is going on in the person and how to correct it.
The same is true of a database. While running a battery of tests and applying heuristics to the results can provide a better running system quickly, we can get much better results by getting to know how things work in depth and correcting the problem rather than hiding the consequences. Heuristics are a good crutch to keep things running "as is" as best as possible, but what they do should be analyzed for the "low hanging fruit" to find and correct any flaws in the original system design.
They say duct tape can fix just about anything. Me, I'd like the right tools to correctly do the job (though I may use duct tape in the meantime!)
In answer to Don...
Howard J. Rogers, February 28, 2005 - 3:52 am UTC
Don quotes me, though he seems to be having some difficulty actually attributing the quote. But whatever.
As is typical of Don, he quotes out of context, and distorts my plain meaning in the process. I left the Oak Table because of a dispute with just one of its members, and the entirely *personal* evaluation of the right balance between technical detail and simplification to make a point which that dispute brought to a head for me.
Don should not attempt to use that personal assessment as though it were some form of criticism of the Oak Table group generally: as the entire piece from which he quotes makes clear, I continue to have no issues at all with the Oak Table group, its methods, its techniques, its goals or its members in general.
What Don needs to remember (or learn, if he's not done so already) is that in order to be able to legitimately simplify, you actually need to know what you're talking about in the first place. The technical underpinnings must be there, otherwise you aren't simplifying: you're distorting, misleading and myth-making.
When Don is in danger of overwhelming his audience with technical minutiae, you can be sure I'll let him know. Until then, he really shouldn't knock the only sure guide to effectively working with the Oracle database we have (and which I have precisely zero issue with): rigorous, technical understanding and proof.
Clash of the Titans
Tim Hall, February 28, 2005 - 4:32 am UTC
This has been a great thread. It's like the Oracle version of Ultimate Fighting ;-) Here are my thoughts...
Rules Of Thumb:
"All generalizations are false, including this one." (Mark Twain)
The trouble is a lot of people rely on them, as not everyone has the time to fully understand the concepts behind what they are doing. On many occasions I've found myself simplifying to the point where it becomes meaningless when discussing issues with others. For a certain proportion of the Oracle population you must say "don't touch it" or "here are some basic rules to follow". I don't like it either, but it's the reality I've come to know and hate.
Predictive Modelling:
I guess it really depends on the type of system. In the past I've worked on systems that are completely predictable and others that are totally random. It may be that predictive modelling can spot a pattern amongst the chaos, but I'm guessing this one-size-fits-all approach will have its limitations.
Proof:
Well, that's a difficult one. I can prove very simple observations, but recreating real life situations is tough. I agree with Tom's approach in this and other threads that you should do your best to give proof for every observation, but I am also aware that I've seen things I cannot give a simple example for. I try not to make a point of promoting unproved solutions, but that doesn't mean there may not be some merit in them.
I guess the important thing to consider here is that proof is very specific to a system and version number. I could say something like:
"In PL/SQL implicit datatype conversions are slower than not doing a conversion at all!"
Sounds sensible and appears to be true in 8i and 9i. Trouble is that Bug:4159053 shows that this is not the case in 10g. A statement that has been proved true time and time again in previous versions is now false, at least until this bug gets fixed. I'm guessing most of the people reading this thread have been promoting this false truth ;-) It's all relative!
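(For anyone unfamiliar with what is meant, a trivial, hypothetical illustration of the three cases:)
declare
   n number;
begin
   n := '12345';             -- implicit VARCHAR2 to NUMBER conversion
   n := to_number('12345');  -- explicit conversion
   n := 12345;               -- no conversion at all
end;
/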
Cheers
Tim...
Great debate
A reader, February 28, 2005 - 7:04 am UTC
This is great debate - I even logged in through the weekend to see how this progressed. There is a lot of passion out there on this!
When I graduated back in the days of weird haircuts, I worked on a project to math model a very complex real world system (the chemistry of auto exhaust over a catalyst - in hindsight way too hard for the computing power available then). Although it took my colleagues 20 years to complete the model (way after I left them) I did come away with some useful lessons on modelling (and some rules of thumb :) )
1) When modelling the non-trivial, the effort required to get a truly working and validated model is vast! (ROT)
2) Be very careful looking at cause and effect. If you see 'A' and 'B', does A cause B, or B cause A, or is it just a coincidence?
3) Just because A has a causal relationship with B, it does not imply that this is the only (or even most significant) factor involved
4) All theories must be validated by rigorous measurement.
Returning to the predictive model debate. In my opinion for a model to be useful it must also include a cost/benefit measure for doing the re-org; i.e. is the effort in CPU/IO/availability to users terms worth any gains in performance. Given my current level of understanding this is a big task and my time may be better served fixing up other problems.
Having started my career as a kineticist, I keep finding that systems (including databases) reach equilibrium: make a change (without modifying the underlying model) and after a while we are back to where we started. If a system needs frequent reorganisation to improve performance, I would look to removing the need to reorganise (that is, fix the design of the application), rather than paper over the cracks with regular maintenance.
One last comment on Mr. Burleson's perception of being insulted..
Bill Schwartz - Minion of the Truth, February 28, 2005 - 9:45 am UTC
Mr. Burleson, I sincerely hope that our calling you out and asking you to prove your statements was not considered insulting. If so, you should most definitely not be in the consulting business, because if you showed up at my shop (at my cost and at my request), made several bold and definitive statements about how I could improve my database performance, and then failed to back those statements up with reproducible facts, I would certainly ask you to leave and not even consider paying you. As far as being one of Tom's "minions" is concerned, please be aware that we are not "Tom's minions" -- we are minions of the truth; we love our work and want to be the best we can possibly be. We want facts, not myth. We want order, not chaos. And we want our mentors to be right... and when they are wrong, we want them to be adult enough (and professional enough) to say "Oops, you know what -- I was wrong there, thanks for correcting me". I have been in I/T for nearly 25 years now, and the reason why I am still working in it is because I still don't know everything. Every day -- EVERY DAY -- I learn something new. How about you?
still drifting off-topic..
Connor McDonald, February 28, 2005 - 10:20 am UTC
Re metalink font...
"that is not a correction -- that is an enhancement request"
Tried that one...got rejected as "too hard"
(no need to reply to this one... I'm just having a rant)
David Aldridge, February 28, 2005 - 10:31 am UTC
Another cross-posting, this time from ...
http://dba.ipbhost.com/index.php?showtopic=1239&st=0entry4382 ... isn't the internet wonderful?
Seems to me that there are two sides to this Challenge.
Firstly, "Identify (by measurement) structures in the database that might allow for performance improvement following some kind of reorganization"
Secondly, "Identify what opearations could be performed to take advantage of this in improving system performance"
It is very important that these issues are not confused.
So for example, take the object-specific measurements ...
* The table METER_READINGS is accessed by a large number of range scans based on METER_ID during working hours, in each case retrieving a small number of rows.
* The table METER_READINGS is accessed by a small number of range scans based on DATE_OF_READING outside working hours, in each case retrieving a large number of rows.
* The clustering factor of the METER_ID index is relatively high.
* The clustering factor of the DATE_OF_READING index is relatively low
What actionable conclusions can you draw from this data?
Well, not very much -- you learn that you can optimize daytime operations by re-ordering the table based on METER_ID. You can optimize nighttime batch jobs by ... well they're probably already optimized. But if you re-order the table by METER_ID then you're probably going to have to do the same job in a few weeks or months.
But while the measurements contain interesting data, they do not IMHO amount to useful information on their own. In order to take action there are other non-measured data to take into account. For example ...
* The customer service reps are finding the system to be embarrassingly slow during the day when they are trying to look up a customer's meter readings, OR ...
* Batch jobs are starting to spill over into working hours because they are not completing quickly enough
Once you have this kind of information handy, then you are getting close to something actionable. And the first action you should take is to estimate (by calculation or by experiment) what benefits can be gained by potential actions -- if you chose to rebuild the table as a hash cluster based on meter_id then what benefits can you expect to daytime operations, and what detriments will be incurred by nighttime batch jobs? (This is where an examination of wait events would come in handy, of course).
What if you cluster in the METER table also? What if you range partition by date_of_reading and re-sort the older partitions by meter_id? What if some meters are read hourly, some daily, some monthly -- what effect does that have?
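(By way of illustration only, a sketch of the hash-cluster option -- the names are made up, and the SIZE/HASHKEYS values would need real calibration against row counts and row sizes:)
-- readings for a given meter hash to the same block or blocks,
-- so the daytime range scans by METER_ID touch far fewer blocks
create cluster meter_hash_cluster ( meter_id number )
   size 1024 hashkeys 100000;
create table meter_readings_clu (
   meter_id        number,
   date_of_reading date,
   reading         number
) cluster meter_hash_cluster ( meter_id );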
I would say that this represents a methodology that is scientific enough for me -- measurements taken, inferences drawn, hypotheses tested, action taken ... etc.
Now, as I'm sure TK would point out, all of those measurements were entirely predictable at the design stage -- meter reads are loaded to a heap-organized table based roughly on the date when they were taken. The designers ought to anticipate this situation. As DB would point out, not all designers are smart enough, or are Oracle-savvy enough, to predict these issues. And you know what, they're both absolutely correct.
However, my sympathies are with TK on this one (and I'll admit that in this whirlwind of activity I may have missed a point or two) because he seems to me to be very accepting that some of the measurements might be valid, but is much stronger in emphasising that the possibilities for taking action are much wider, and are much more application and environment dependent than DB is admitting. There are a great many caveats surrounding index rebuilds and table re-ordering for example. I wouldn't mind an automated system producing and associating the measurements I gave as examples above, but I want to be free to interpret them myself. If it presented those facts to me then I'd be interested (maybe as a way of bashing the developers over the head, "Hey dummies, you missed this huh?"), but if it spat out a list of indexes that "need rebuilding" or tables that "need re-ordering" then I'd be really sceptical. Give me some facts and I'll draw my own inferences, thanks.
Let's discuss the AMT indexing predictive model
Don Burleson, February 28, 2005 - 10:57 am UTC
Hi Tom,
Thanks for the info! OK, my turn. Let's address the issues:
***********************************************************
Unfair jabs
I apologise for the unfair cartoons. It's not fair to lump all Oracle scientists into a single group, and I'm sorry. The "doing it over-and-over" cartoon was a response to your comments here:
But I would want to fix it ONCE, not over and over and over. I prefer to do a fix once and for all, otherwise I am reminded of the movie "Groundhog Day" where Bill Murray wakes up and relives the same day over and over (I'm hearing Sonny and Cher on the radio singing "I got you babe" over and over)....
By the way, I'm still not convinced that there is any permanent solution to the index fragmentation issues like Mr. Cunningham described (except with the Oracle software, of course), but I agree that it's a worthy goal.
I was also reacting to the distracting and inappropriate mudslinging in this forum and the insulting e-mails that I've received, simply by bringing up this topic:
- Mockery of my dress code
- "This is a classic thread on why Tom, you're one of the most outstanding and respected Oracle professionals in the business. - And why Donald K. Burleson isn't."
- "This is a contest I would not want to win."
- "I was shocked to discover what a serial offender this guy is"
- "It's remarkable how he continually publicly embarrasses himself by confirming his ignorance to everyone with what he writes."
Frankly, I don't understand why this discussion has to become personal and hateful.
Anyway, I should have resisted the temptation to participate in mudslinging. I'm not very experienced at it, and I'm sorry.
Let's stick to the technical issues. . . .
***********************************************************
Table extends with lots of free blocks:
Table updates will run faster after reorganizations when the table has unbalanced freelists (with dictionary-managed tablespaces only). It is unclear if bitmap freelists (automatic segment space management) will benefit.
My precept was simply to consider a table reorg when a table extends even though it has free blocks (for dictionary-managed tablespaces only). The update speed issue was a reference to the overhead of constantly extending the table. Like I noted, I've not yet seen this with bitmap freelists (ASSM), so it's largely a moot issue for me.
Anyway, when this first crippled a database with me, we had a critical table with 50 freelists and very limited disk space. After a single-threaded purge, dba_free_space showed 3 gig free, yet the table immediately began extending, and I called in a sev-1 iTar. A short time later, the table failed to extend, and over 200 data entry clerks were sitting on their hands.
I made some calls and a fellow DBA read me a document marked "FOR INTERNAL DISTRIBUTION ONLY" (or something to that effect), which described my issue and noted that it only happens under very high concurrent loads. It had a title something like "table extends despite having free blocks".
Ultimately, I was told to reorg the table and parallelize future delete jobs into 50 concurrent chunks, and all was well after that. This was about 7 years ago, on Oracle8i, but I don't recall the exact release number.
Can you search your employee-only system for this note?
***********************************************************
Know Your Sources:
You wrote: "I'm hardly anonymous Don, I don't think anyone other than you wonders who that anonymous guy behind the forum is. Heck, you and I even had dinner together once!"
No, sorry, I was not referring to you. Whenever someone makes a bold statement yet remains anonymous, I'm naturally curious to know if the person REALLY knows what they are talking about. To paraphrase you, "trust, but verify", and Cary Millsap, "know your sources". Here are my personal rules:
- A source has inside information (i.e. they wrote the source code)
- They have extensive credentials in data structure internals (MSCS, PhD)
- They were selected to attend a very competitive school (Oxford, MIT, Stanford)
- They have extensive experience in their area of expertise
Also, I wasn't being sarcastic when I noted that many members of your group are way too modest. In my experience, when someone does not publish where they went to college, it often means that they did not go to college. For example, when I first encountered Jonathan Lewis I could find out nothing about his background or qualifications, and just like Cary Millsap notes, he was an unknown commodity to me. Later, when I discovered that Lewis has a master's degree from Oxford, it confirmed my suspicion that Lewis was a brilliant fellow, because mediocre people usually don't get accepted into prestigious programs.
Does this make me a snob? No, it makes me a realist.
Almost every day I see articles and whitepapers from strangers, and I need to know how much credence to give their work, just like Cary Millsap cautions.
***********************************************************
The point of this whole thread
Returning to the initial poster's question (it was un-signed and anonymous), what are your thoughts on Sushil Kumar's statement that AMT will soon identify indexes for rebuilding, and the goal of creating an independent predictive model?
I know that you don't like my proposed method, and that's fine. That's why I posted the challenge, to get great minds looking at the problem!
Can you share how you would solve the problem, please? And everyone else too! Let's share ideas and make this fun!
February 28, 2005 - 11:56 am UTC
laughing out loud -- that is what I've been saying (keep it technical, address the points). As I stated, I don't edit other people's content. I don't make people log in. If they use racist or foul language, I'll delete it -- but whatever.
Mr. Cunningham described chained rows, the result of inserting "small rows" and updating them to big rows. Something notoriously hard to fix with a simple reorg (Groundhog Day all over again), but by taking a second to look at the problem -- and applying your knowledge of how the row starts in life and how it grows -- you can come up with solutions (not just a rebuild which you'll do over and over). You can say "ah, all of the rows are grown right now: reorg, alter set pctfree higher for new stuff". You can say "ah, the rows are a mess right now, some with status='X' are done, status='Y' are half done, and newly inserted rows need something yet different for their pctfree -- let's rebuild in two stages and correct the problem". Or you can use Jonathan's approach with minimize records per block.
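(To make that concrete, a minimal sketch only -- the table and status values are hypothetical:)
-- stage one: rows with status='X' are fully grown, pack them tightly
create table t_done pctfree 5 as select * from t where status = 'X';
-- stage two: newer rows will still grow in place, leave them room
create table t_active pctfree 40 as select * from t where status <> 'X';
-- or, the minimize trick: load the table with the most rows you ever
-- want in any single block (it cannot be empty), then:
alter table t minimize records_per_block;
-- future inserts will never put more rows than that into a block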
Your space utilization issue as you describe it is classic freelist groups -- it would not happen with simple process freelists. <b>Your script as it is above would show you FREELIST GROUP issues, not freelist issues.</b> Your script as it is would not show you problems with locally managed tablespaces -- even though they definitely have them. It would not show you issues with tables that were not analyzed via the ANALYZE command. And it is not an internals note to know this -- it is somewhat inherent in the design of freelist groups, one of the reasons ASSM was created (since almost all freelist group use is under RAC or OPS).
I cannot find any such note and you have access to basically the same system I do, just with blue images and proportional fonts.
Would you care to comment on this:
<quote>
After a single-threaded purge, dba_free_space showed 3 gig free, yet the table immediately began extending,
</quote>
and help me understand how that relates to 50 freelists? That is, what exactly is the relationship between space that is not allocated to any segment -- dba_free_space -- and the management of space within a segment on freelists?
I would have to assume that BEFORE the purge there was 3 gig free in dba_free_space (or someone else truncated a table (no freelists), dropped something (no freelists) or added disk), but it was not related to the purge -- and if 200 clerks typed in 3 gig of data in a short time, well, ok (you see, the 3 gig in dba_free_space was there to satisfy the extends)
<b>Could it be that your purge didn't put anything ON the freelists?</b> (hey, remove one row per block and perhaps nothing will happen; you can remove a lot of rows, but still not put anything on the freelists).
You weren't looking at the freelists but unallocated space (apparently). That could certainly be an issue.
In retrospect, wouldn't it be nice to know "why" though? I get that question many times: "we'd like to know why so we can avoid it in the future". I'd want to know why, but that is me. Again, this is a measly single user test, but it does demonstrate that a purge will not necessarily free anything
(and might be instructive as to why you want to design for a purge if possible so you would not be forced to reorg and rebuild after them?)
ops$tkyte@ORA9IR2> create table t pctfree 10 pctused 5
2 as
3 select rownum id, all_objects.* from all_objects;
Table created.
ops$tkyte@ORA9IR2> exec show_space( 'T' );
Free Blocks............................. 0
Total Blocks............................ 512
Total Bytes............................. 4,194,304
Total MBytes............................ 4
Unused Blocks........................... 113
Unused Bytes............................ 925,696
Last Used Ext FileId.................... 8
Last Used Ext BlockId................... 521
Last Used Block......................... 15
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2> delete from t where mod(id,10) <> 0;
25020 rows deleted.
ops$tkyte@ORA9IR2> commit;
Commit complete.
ops$tkyte@ORA9IR2> exec show_space( 'T' );
Free Blocks............................. 0
Total Blocks............................ 512
Total Bytes............................. 4,194,304
Total MBytes............................ 4
Unused Blocks........................... 113
Unused Bytes............................ 925,696
Last Used Ext FileId.................... 8
Last Used Ext BlockId................... 521
Last Used Block......................... 15
PL/SQL procedure successfully completed.
<b>I just purged 90% of the data -- nothing on the freelist though. dba_free_space is not really relevant here; there are no blocks on the freelist for inserts... There could have been:</b>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> drop table t;
Table dropped.
ops$tkyte@ORA9IR2> create table t
2 as
3 select rownum id, all_objects.* from all_objects;
Table created.
ops$tkyte@ORA9IR2> exec show_space( 'T' );
Free Blocks............................. 0
Total Blocks............................ 512
Total Bytes............................. 4,194,304
Total MBytes............................ 4
Unused Blocks........................... 113
Unused Bytes............................ 925,696
Last Used Ext FileId.................... 8
Last Used Ext BlockId................... 521
Last Used Block......................... 15
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2> delete from t where mod(id,10) <> 0;
25020 rows deleted.
ops$tkyte@ORA9IR2> commit;
Commit complete.
ops$tkyte@ORA9IR2> exec show_space( 'T' );
Free Blocks............................. 398
Total Blocks............................ 512
Total Bytes............................. 4,194,304
Total MBytes............................ 4
Unused Blocks........................... 113
Unused Bytes............................ 925,696
Last Used Ext FileId.................... 8
Last Used Ext BlockId................... 521
Last Used Block......................... 15
PL/SQL procedure successfully completed.
<b>Now I have almost 400 blocks on the freelist -- pretty much all of them (and if I had multiple freelists -- they would not have all ended on one).</b>
<quote>
Whenever someone makes a bold statement yet remains anonymous, I'm naturally curious to know if the person REALLY knows what they are talking about
</quote>
and so I still cannot figure out who it was that prompted you to create that article and link to it here then.
I look at people's work, experience, their writing, their contributions, their attention to detail. Lots of people went to college and university. Does not imply anything at all.
<b>the point of this entire thread was to me the precepts.</b>
The proposed method -- building a predictive model, fine.
As someone you apparently respect said above:
.. has proved that the precepts are naive and simplistic, and that anyone who spends the effort writing a program against them is almost certainly wasting their time.
That was my point here, that was the point of the thread. And hey, if Sushil is building it -- and he has internals knowledge, isn't our time perhaps spent better on understanding it...
Must ask this question - Mr. Burleson, please reply if you have time...
Bill Schwartz, February 28, 2005 - 11:24 am UTC
Mr. Burleson, I must question you on this. What does where I went to college (or not) have to do with my skill? A college degree tells me (when the person is a stranger to me) that they definitely know how to pass a test. It does not speak to whether or not that person has the capacity to learn. In my career, I have had to re-train several college grads who understood the theory taught in the classroom, but were unable to see how that theory applies (or doesn't apply) to real-life business situations. Theory is good, theory gets us thinking beyond the box. But the inability to learn from one's (or other's) mistakes is in and of itself the box. Does a college degree (or a teaching certificate) make someone an expert in their field? I think not. I would much rather hire a person with 10 years business experience who really knew their stuff even with just a high school diploma, than hire a fresh-out-of-college check-out-my-bachelors-degree person who didn't have a clue other than what (s)he was taught in the classroom. College is a great foundation - but you must develop the proper skills to build a solid house on that foundation.
Know Your Sources:
A reader, February 28, 2005 - 11:38 am UTC
Some people don't make a great play of where they were educated. The fact that I have an Oxford degree does not (or should not) affect the credibility of anything I may add to a debate. Debate should be judged on intellectual content and rigour of argument, and not on any predisposition to ignore 'unapproved' sources.
Insightful Discussion
A reader, February 28, 2005 - 11:51 am UTC
Best to learn how to think, rather than what to think.
What a furball!!!!
Robert Freeman, February 28, 2005 - 12:19 pm UTC
Earlier Tom asked me my opinion of the technical merits of Don's arguments, and in a way, this thread has really made my answer for me. I think Don asked a question, and I think that there is some experience behind his precepts (I've seen rebuilding indexes work wonders, for example). The answer from Tom had technical merit too (yes, rebuilding indexes is a short term solution in many cases and can have negative impacts). However, I don't think this thread really answers the question at hand. Don starts his post with:
The goal is to create a reliable predictive model that will suggest tables and indexes which will measurably benefit from reorganization.
And he proceeds to provide some precepts that he thinks relate to the goal, such as:
Index fast full scans will run faster after index reorganization whenever the density of the index entries becomes greater. In other words, it takes less time to read 100,000 entries from a 100 block index than reading the entries from a 500 block index.
Now, Tom in his reply takes each of these precepts to task, and we start this big debate over these precepts and the technical merit of them. We then get piling on, and yet we are missing the whole idea of the thread. The focus of the thread, in my mind, is predictive modeling for Oracle; it's about how we can project performance problems before they occur. It turned, instead, into a furball of personal attacks (I can't tell who attacked who anymore), insults and some technical discussion. I would have preferred someone say, "You know, the idea of a predictive model is great, but I'm not sure your approach is quite right"
than to spend pages here lambasting Don and his precepts. I honestly think Tom tried, but perhaps missed the mark a bit by concentrating too much on the "precepts", and his disagreement with them.
So, let's get past the precepts argument, and instead look at the question: what does it take to create such a predictive model? What information can we collect, and then what do we do with it?
What would such a model look like? How do we model data and provide logic that ultimately would be able to say, "Oh dear, you have this index and it's going to really start to perform terribly, you should rebuild it."? On top of that, our model/logic could say not only that you should rebuild the object, but it might also suggest that you could redesign it -- and while you're at it, these other objects might benefit from a redesign effort, and here is how we suggest you redesign them (perhaps it would suggest using IOTs!). With such a thing, we bring Don and Tom into the same model!!! My goodness, would that then be the equivalent of string theory for Oracle databases? The Unified Theory of Oracle (UTO), what a concept!
With such a model in place we can make some educated decisions. If our model says we need to rebuild and redesign 15 tables because they are not efficient and, by the way, your most common SQL statements will improve their performance by 25 percent, WOW, I can start putting time and dollar estimates together and send them to management.
If that same model would give me the option of rebuilding 12 indexes and re-orging one table (and give me some suggested settings for such a reorganization so that it is as efficient as possible) then WOW, I can provide some short term improvements and also move towards making long term improvements. Is it possible that both models can live together? Is it possible that short-term solutions coupled with longer term responses might actually be the right idea?
I'd suggest that we are typical of scientists in other fields. We have the skeptic who states that he would not want to try because of the mathematical complexities, typical of some in science who insist it can't be done, until someone proves that it can indeed be done. Then we have those who have an axe to grind, also typical of the scientific community. Next we have the theorists who insist that their theory is the only way to explain the universe, and that everyone else just doesn't get it. Finally, we have those like me who just want a nice, clean, technical discussion that stays on point and doesn't digress into a furball.
The thing is that I know a great many of the people here, and like them all. Each of you has the capacity to contribute to this discussion without digressing into dialogue that is counter productive. Each has a point of view that, perhaps, has some truth. I think I like this quote from Niels Bohr about truth, and I think it applies here:
The opposite of a correct statement is a false statement. But the opposite of a profound truth may well be another profound truth.
I know Tom, and I know that he has an interest in the truth; I know Don, and I know he has an interest in the truth. I know many of you here, and all of you have the best of intentions. As for me, while I see some benefit in debate, I see more benefit in collaboration. Do we really like furballs more than we like productive dialogue?
If I'm wrong, well, what can I say -- I've been wrong before.
Cheers and respect to all!
February 28, 2005 - 12:57 pm UTC
Robert
(cheers and respect to all with an intro of furball, indeed)
Please -- this is a thread over here, not a thread over there. I started with "I don't agree with these precepts" and sort of ended there. I fleshed out one of them (index rebuild, needs more work and here is why). One of them is not solvable via a reorg, but rather needs a physical schema object to beat it. Two of them are flat out *wrong*. Call me a stickler but
o tell me do freelists get unbalanced?
o and only in dictionary managed tablespaces?
o that rebuilding freelists will make updates go faster?
o that chained rows are fixed by a reorg?
o that migrated rows materially affect large table full table scans?
My goal -- myth busting before the myth gets entrenched. No one has answered any of my technical questions. How do you detect unbalanced freelists? What is the benefit speed-wise? That script proposed above -- forgetting the SQL that isn't quite right for a moment, or the logic that assumes "2" is the maximum number of freelist groups -- won't do it.
And the story behind that freelist rule -- dba_free_space? Not at all curious "why"? (I'd suspect a bad pctused and an extremely unlucky purge routine, and having 50 or 500 or 1 concurrent delete processes would do the same -- I can show that). It is doubtful that freelists actually had anything to do with the table growing (and it grew 3 gig quickly -- there was 3 gig free; why were 200 clerks able to type in 3 gig of data so fast? this purge must have freed up hundreds of gigabytes in order for work to go on for a couple of days).
Shot a gun into the room and got a bad guy -- but then again, this system that is short on space but generated 3 gig of next extents "in a short period of time"? Forgive me if I sound skeptical about the story -- ok, it was 7 years ago (that is how myths get started though, isn't it -- bits of information from the past explained away... google "define: myth", I won't dare put another definition here!)
I agree that rebuilding indexes will achieve precisely one of the following three things:
a) it'll make some things faster
b) it'll make some things slower
c) it'll not affect other things at all
as I said, it does appear that the predictive model is being built -- I mean, Don keeps telling me how much money Oracle is apparently putting into this -- so I would like to kill the myths and move on. That is all.
Scientist, there is that word again.
I don't have an axe to grind, I just hate technical inaccuracies stated about how the software *actually works*. Well, I hate inaccuracies in general.
And I did in fact add to your list of things to consider when building this predictive model; consider:
o array sizes used (you'll have to input that most likely)
o whether the goal is "make oltp response time as fast as possible" or "optimize the fast full scan" (which makes you more money?)
o migrated rows and INDEX RANGE scans would definitely be something to consider (but not full scans, not true).
o that the model should perhaps suggest "this is a candidate for an IOT" to preserve the order of the rows upon insert (yes, this too penalizes MODIFICATIONS, like the rebuilding of the randomly-inserted-into index will, in order to make retrieval much better), instead of "reorg this puppy" (I'd still like to see the steps involved in that, in getting a table reorg'ed in a sorted fashion); a sketch of the IOT alternative follows.
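(A minimal sketch of the IOT alternative, reusing the meter-readings example from above -- names and columns are illustrative:)
-- an IOT stores rows in primary key order, so readings for a given
-- meter stay physically clustered as they arrive -- no reorg needed
create table meter_readings_iot (
   meter_id        number,
   date_of_reading date,
   reading         number,
   constraint meter_readings_iot_pk
      primary key ( meter_id, date_of_reading )
)
organization index;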
I'll concede once and for all here (I guess I did above too): yes, an index rebuild/coalesce can benefit -- almost always on the sweeping left-to-right thing where you delete most, but not all, of the right hand elements over time. That is easy to demonstrate and easy to find. However, the example I put above, the randomly inserted into index -- it pretty much "isn't" a good candidate. Why? Because the index immediately starts to break down again and does a lot of work doing so. I've seen quite a few people who believe it is in fact NOT DAFT to optimize the system for the work they do (OLTP) -- but so be it. Those indexes -- by rebuilding them, you could be causing more problems than you solve (if you solve any). That index is going to be 40-60% empty soon.
Well said, Robert
Peter Scott, February 28, 2005 - 1:08 pm UTC
As one that may fit into your sceptic camp (sorry, my earlier modelling post was anonymous), I see great problems in building a universal predictor. I am not saying it can't be done, but it would be hard -- not least because there is so little clarity as to the meaning of "broken", and as to a consensus on how (or if) to fix....
A successful model requires a lot of analysis, and this in turn requires a lot of data capture to provide the raw information to prove (or not) the precepts; this to me would have to be a collaborative effort, given the diversity of Oracle databases in the wild.
This is Tom Kyte
Tom Kyte, February 28, 2005 - 3:56 pm UTC
I removed an unattributed comment from above, I felt is could be construed "inflamatory"
February 28, 2005 - 4:00 pm UTC
change "is" to "it" (i have no edit capability on the comments... only delete, which I use hardly ever use and generally only when I email them, but there was no email with it this time...)
Bill's question on college:
Don Burleson, February 28, 2005 - 4:12 pm UTC
Hi Bill,
Bill wrote: "Mr. Burleson, I must question you on this. What does where I went to college (or not) have to do with my skill? A college degree tells me (when the person is a stranger to me) that they definitely know how to pass a test. It does not speak to whether or not that person has the capacity to learn."
Well, interesting point. . . .
In my limited experience, I've noticed that anyone wearing a Brass Rat can learn technology at incredible, even astonishing speeds!
http://web.mit.edu/newsoffice/2000/ring-0315.html
Did you know that MIT does not consider a language like C++ to be a legitimate academic topic? It's considered a trade school topic (according to an MIT friend) and you have to learn it yourself, before the first day of the data structures class. I've never met an MIT grad who was not amazing, at many levels, but most of all in their ability to learn fast, very fast.
I've also noticed this high-speed uptake with graduates of Stanford, Harvard, Duke, West Point, and many other first-tier universities. For example, Col. John Garmany, with his background in Electrical Engineering and Oracle, can diagnose performance problems at every level, right down to the chipset instruction, and I'm amazed every time I watch how quickly he grasps and solves a problem. Is it his West Point discipline, his MS in software engineering, or his training as an Army ranger? I don't know, but it sure is amazing to watch!
I make no secret that I prefer people with the ambition, drive, brainpower and perseverance to take on some of the world's most challenging science programs:
http://www.dba-oracle.com/jobs.htm
Of course, there are notable exceptions, but let's face it: in the world today you often don't get a chance to get to know people, and a resume provides you an excellent insight into the mind-set and experience of the stranger at the other end of the web. . .
Now, as for someone claiming to be a scientist, the bar is raised even higher. To me, every scientist I've ever met conducts and publishes scientific research, and I don't think that "I'm a scientist" is a title to be thrown about carelessly.
http://www.dba-oracle.com/oracle_tips_credence_qualifications.htm
Great question! These are just my opinions, but there are lots more here:
http://dba.ipbhost.com/index.php?showtopic=1249
February 28, 2005 - 4:26 pm UTC
(Not that it really matters, but who is claiming to be a scientist here, that you keep bringing it up and up and up?)
unbalanced freelists - again
Don Burleson, February 28, 2005 - 4:31 pm UTC
Tom wrote <quote> After a single-threaded purge, dba_free_space showed 3 gig free, yet the table immediately began extending, </quote>
and help me understand how that relates to 50 freelists?
Sure. I was told that the table had a freelist imbalance, by a member of the Oracle COE.
Here's what happened:
- Really big table (25 gig), 20-40 concurrent updates at any time
- Running out of disk space, management won't buy more
- On Sunday a single job purged out 3 gig of rows
- On Monday, table immediately began extending, costing the company a fortune in unplanned downtime
- The DBA is punished
Tom, I don't think I have the same access as you, because I cannot see any of the "Internal only" notes, plus I can't get to the TAR after 7 years.
QUESTION, SIR: If I can get this "not for distribution" document, can I publish Oracle's experience with unbalanced freelists, or will I get sued?
It's true, I tell you! True! It happened! Really.
But regardless, you concede that in RAC this happens, and the table would benefit from a reorg, right?
Anyway, is there any chance that we can discuss how to leverage the wonderful AWR historical performance data? It's a gold mine of information. Do you agree?
With utmost respect,
DB
My background:
http://www.dba-oracle.com/resume_don.htm
February 28, 2005 - 5:09 pm UTC
but dba_free_space is space NOT ALLOCATED TO ANY SEGMENT.
It is free space *outside*, not allocated, for anyone. It has nothing to do with freelists.
If a single job purges 3gig of rows, it isn't going to show up in DBA_FREE_SPACE (at least not before 10g and not unless the table was in an ASSM tablespace and not unless you said "alter table t shrink space compact" after the purge process)
If someone from the COE said "hey, you have unbalanced freelists because they all went into dba_free_space", THEY (the analyst) were unbalanced. And wouldn't you be able to tell that THEY are unbalanced yourself, if they somehow connected 3 gig of space in dba_free_space with unbalanced freelists? (dba 101? no possible connection between dba_free_space and freelists..)
If the table immediately began extending -- you had 3 gig free for it to extend into? For you see, dba_free_space is not "freelist space". So, why couldn't it extend? (perhaps the tablespace was fragmented beyond belief -- but that fix is simple and fast: alter the table to set the next extent size so it can use the smaller bits left lying about in the free space).
I guess I just cannot see how a table could be in a tablespace with 3 gig free and you get "stuck" for more than a couple of minutes (to determine there is free space, to determine that the next extent is apparently bigger than the largest free extent, to alter the table to set the next extent to something small enough that it can use your 3 gig of space).
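(A minimal sketch of that couple of minutes of triage -- the owner, table and tablespace names are illustrative:)
-- how big is the largest unallocated chunk in the tablespace?
select max(bytes) largest_free
  from dba_free_space
 where tablespace_name = 'DATA_TS';
-- what will the table ask for next?
select next_extent
  from dba_tables
 where owner = 'SCOTT' and table_name = 'BIG_TABLE';
-- if next_extent exceeds the largest free chunk, shrink the request
alter table scott.big_table storage ( next 512k );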
So, I am totally and utterly failing to see the connection between the deletes and the 3 gig in dba_free_space that you seem to think got there from a bunch of deletes (you do know that a delete will NOT give space back to the system at large -- it'll put it on the FREELISTS maybe, but not into dba_free_space). What you say and the reality of the way the software works are just "not in even the smallest bit of alignment" here.
I don't disagree that something happened -- I'm saying that what you are saying, especially when you throw in dba_free_space having 3 gig free and that you deleted 3 gig of row data, does not "make sense".
Blocks that have been deleted from would never be anywhere near dba_free_space.
You would not have, could not have run a query to see that there were unbalanced freelists (in fact, you would have to dump blocks to trace in order to analyze them). Freelist groups -- sure, you can run a procedure.
But dba_free_space? That is totally confusing to me, making me say "what?", and it makes the whole story sound a bit fishy -- it leads me rather to believe that pctused was set inappropriately, or that the delete, while it deleted 3 gig of data, just did not take the blocks below their pctused threshold. That explains it nicely -- that happens all of the time. But it would not explain space in dba_free_space (which, if it were there, you should have made "available" to the table).
And your story points out why you want to understand what's going on, rather than just reacting when the stuff hits the fan.
Why was the DBA punished? What did they do wrong?
If you can get an internal only note, feel free to email it to me, sure. I cannot give you permission to publish publically something stamped "confidential".
and just slightly off topic -- You said you see this "all of the time, constantly", do you really? How do you detect it? Since you cannot query freelists?
predicitive model - points of view - discussion
harry Conway, February 28, 2005 - 4:36 pm UTC
February 28, 2005 - 5:10 pm UTC
binary search, Don already posted his more polite discussion as he called it ;)
and yet Don,
Bricklen, February 28, 2005 - 4:44 pm UTC
after all that, you still didn't answer Tom's question about who you are referring to as this self-proclaimed scientist. You seem to have a knack for skirting direct questions.
Oracle Scientists Network
Don Burleson, February 28, 2005 - 4:53 pm UTC
Tom wrote "(Not that is really matters, but who is claiming to be a scientist here that you keep bringing it up and up and up?)"
Er, OK. Sorry if I was vague.
The Oak Table makes it very clear that the members of the Oak Table are, by virtue of their membership, Oracle Scientists. It's right there in the mission statement:
http://www.oaktable.net/main.jsp
The OakTable network is a network for the Oracle scientist, who believes in better ways of administering and developing Oracle based systems.
It's also right here, in your publisher's biography:
http://www.apress.com/article/articleDisplay.html?aID=25
By Spring 2002, the whole thing had grown. One evening, Mogen realized that he had 16 world-renowned Oracle scientists sitting around the dining table. . . Today, a total of 42 people have been admitted to the OakTable, perhaps half of them working for Oracle (there's an up-to-date list on the website).
People seem to think you are an Oracle scientist, and I've never seen you correct them:
http://www.amazon.com/exec/obidos/ASIN/1590592174/002-2430817-1052821
Thomas Kyte, Jonathan Lewis, Steve Adams, Cary Millsap, James Morle, and now Connor McDonald, who are members of the OakTable Network (which I've nicknamed the "Oracle Scientist Club").
Am I missing something?
February 28, 2005 - 5:21 pm UTC
Don,
I belong to an email list. Whatever. Do you see me saying "hey, this is asktom of the oaktable network"? Give me a break. I belong to an email list -- that's about it. I make no claims of being a "scientist" or an astronaut or a football player.
I'm a database technologist, that's about it. I use what looks suspiciously like proofs and scientific discovery (since high school students are taught this, I feel OK in saying "I do it").
If I corrected everything said about me, well, geez. I'd spend lots of time trying to get cartoons right.
It is a valuable source of technical information, great discussions.
Oh go ahead Don, call me a scientist -- that's fine I guess. I'm just a math major from college so I'll be scraping the bottom of the barrel.
This is asktom, not the oaktable network -- didn't think the logos were that confusing.
thank you
Bricklen, February 28, 2005 - 4:57 pm UTC
for not skirting.
Freelists again - again - assistance for Don Burleson
Jonathan Lewis, February 28, 2005 - 5:12 pm UTC
Don,
The point that Tom was trying to make about dba_free_space is this: when you simply delete rows from a table, the table's high water mark does not move.
The fact that you could see 3GB in dba_free_space meant there were 3GB of blocks that were not allocated to any data object, it did not mean that you had freed up 3GB of space in the table.
As far as your question on RAC and "unbalanced freelists" is concerned, the freelists do not become "unbalanced"; the freelist groups can become "unbalanced", and then you can "rebalance" them without re-organising the table by calling the dbms_repair.rebuild_freelists procedure (which has been around since at least 8i).
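(A minimal sketch of that call, with made-up schema and table names -- the procedure and its parameters are from the standard dbms_repair package:)
begin
   dbms_repair.rebuild_freelists(
      schema_name => 'SCOTT',
      object_name => 'T',
      object_type => dbms_repair.table_object );
end;
/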
There was a bug in 7.3.2, fixed in 7.3.3, whereby freed blocks were added to the process (a.k.a. segment) freelists rather than the master freelist. This may have been the source of the document you are talking about. But it would not have applied to the version you were running.
In passing, one of the best reasons for investigating problems properly and understanding root causes is that you can then build a test case to re-create the problem. Then (for important cases) you can run the test case on the next upgrade to see if the problem has been fixed. That way you can stop giving out-of-date, resource-intensive, and potentially dangerous advice as soon as it becomes redundant.
Anyone here interested in predictive modeling?
Don Burleson, February 28, 2005 - 5:12 pm UTC
Is there any hope of discussing the value of historical Oracle performance data here?
- Those who forget the past are condemned to repeat it
- Many Oracle databases have repeating patterns of usage
- AWR is a gold-mine of detailed historical information
Like JB noted in his RMOUG paper (sorry we missed you this year, Tom), intelligent exception alerts (based on statistically valid historical data) are coming soon. I'm really excited about it!
I've already written my own predictive tools, and they work quite well. By analyzing historical trends (averages by day-of-the-week and hour-of-the-day), I've cobbled together a framework where you can indeed fix Oracle performance issues BEFORE they occur. See my book Creating a Self-tuning Oracle Database:
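(For flavor, an untested sketch of that kind of trending against the 10g AWR views mentioned above -- the views and columns are standard, the choice of metric is arbitrary:)
-- average physical reads by day-of-week and hour-of-day
select to_char(s.begin_interval_time,'DY')   day_of_week,
       to_char(s.begin_interval_time,'HH24') hour_of_day,
       avg(q.disk_reads_delta)               avg_disk_reads
  from dba_hist_sqlstat q, dba_hist_snapshot s
 where q.snap_id = s.snap_id
   and q.dbid = s.dbid
   and q.instance_number = s.instance_number
 group by to_char(s.begin_interval_time,'DY'),
          to_char(s.begin_interval_time,'HH24')
 order by 1, 2;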
http://www.rampant-books.com/book_2003_1_oracle9i_sga.htm
Alexey Danchenkov is also developing predictive models for his Workload Interface Statistical Engine (WISE) tool:
www.wise-oracle.com
Anyway, when I revisit this thread in two years, I'll venture my own prediction: by February 2007 these Oracle predictive models will have become mainstream, well-accepted, and commonplace.
Am I the only person here interested in examining the dba_hist_sqlstat and dba_hist_sql_plan information?
With great respect,
Don Burleson
Oracle scientists
Ajay, February 28, 2005 - 5:17 pm UTC
Don,
Isn't science about reproducible results vs. opinions?
To Jonathan Lewis
Don Burleson, February 28, 2005 - 5:37 pm UTC
Jonathan Lewis wrote "There was a bug in 7.3.2, fixed in 7.3.3, whereby freed blocks were added to the
process (a.k.a. segment) freelists rather than the master freelist."
Yeah, good point. The version-specific issues become a real problem. I've started dating my web pages and noting the details of the release number.
Anyway, I wish I'd changed the word "precepts" to "half-baked invalid observations by a fat aging DBA with a bad memory". It would have saved me a whole day of typing. . .
As I understand these models, you don't need precepts anyway. The model analyzes the history, determines if it has reliable patterns (to ensure that the prediction is statistically valid) and performs hypothesis testing to see if any of the table/index metadata can accurately predict a benefit from a restructure.
I still maintain that (so long as the predictor is proven to be a statistically valid measure of benefit from the AWR data) the rules do not need to make any sense at all.
http://www.dba-oracle.com/t_software_regression_modeling.htm
Like I note above, the preference for taking showers or baths was found to be a valid predictor for self-esteem (nobody would have guessed it in a zillion years), and that statistical reliability is what counts, right?
Also, predictive models deal in probabilities, not absolutes, and that may be the source of the confusion here.
Since I was thinking probabilities, I thought that my precepts were true most of the time, and therefore they might be valid predictors. It looks like Tom was working in absolutes, and when he found cases where they were not true, they were rejected. (Please correct me if I'm wrong, Tom.)
Anyway, Jonathan, what are your thoughts on the future of Oracle predictive modeling? Will it turn out to be a flash in the pan or an embraced approach?
February 28, 2005 - 5:52 pm UTC
but -- people do take showers and baths, those are accurate facts, things that actually happen in real life.
rules might not have to make sense, however they better be *possible*.
sigh, I give up. How can things that are not true be true most of the time?
freelists unbalanced? (and please, you say you see it *all of the time*, like a lot -- how? how do you see it)
even if there were such a thing -- how would it affect performance? space management, sure (I think we can all agree on that), but performance?
large table full table scans with migrated rows?
reorg to fix chained rows?
These are not mostly true events, they are false statements.
What is up with dba_free_space? How did that fit into the scenario again? (and if it had 3gig, why was there any down time at all? maybe that is why the DBA got yelled at)
Is that script you use to find "unbalanced freelists" really something you use all of the time?
I guess I'm picky about the facts, yes. Doggedly so.
a half-truth is worse than a lie
Alberto Dell'Era, February 28, 2005 - 5:42 pm UTC
Mr Burleson, please quote my review from Amazon COMPLETELY!
"Which i've nicknamed the "Oracle Scientist Club", since in essence, the reason why they are so succesfull it's that they follow the aforementioned method in investigating Oracle."
So, "Scientist = He who follows the Scientific Method",
AS STATED IN THE REVIEW!
It has nothing to do whatsoever with "High IQ" or "Nobel Prize" or "Rocket Science".
Using this definition, I feel that the OakTable members calling themselves "scientists" is PERFECTLY appropriate, and I'm sure (without even knowing any of them personally) that that's the meaning they intend.
And please note that I said " *I* have nicknamed ", since my calling people like Tom or Jonathan Lewis "Scientists" dates back to the good old days of deja.com, many many years before the OakTable was born. On deja.com I noted their way of investigating and explaining, absolutely loved it, and have followed them ever since. And that has been the best choice of my "career".
For Ajay
Don Burleson, February 28, 2005 - 5:45 pm UTC
Ajay wrote "Isn't science about reproducible results"?
Now, I don't pretend to be a scientist here, but I do know that "reproducible results" are NOT required for forecasting and modeling.
http://folk.uio.no/rnymoen/forecast_background.htm
Methods of forecasting include
- guessing, "rules of thumb", and informal models";
- extrapolation;
- leading indicators;
- surveys;
- statistical time series models; and
- econometric models.
For example, when I make a forecast I have a "confidence interval" (the beta) which expands the farther out I go in time. We used to call it the "trumpet of doom", because long-range forecasts had such a poor confidence interval.
Maybe it depends on the kind of science? I don't know. Any scientists here?
February 28, 2005 - 6:14 pm UTC
Nope, they've all gone home.
Definition of scientist?
Don Burleson, February 28, 2005 - 6:04 pm UTC
Hi Alberto,
Sorry I chopped off your review.
Alberto wrote ""Scientist = He who follows the Scientific Method", AS STATED IN THE REVIEW!"
Yes, that's my point, exactly! Thanks!
Tom says "I make no claims of being a "scientist""
By your definition, Tom is a scientist, despite the fact that he claims otherwise, right?
February 28, 2005 - 6:27 pm UTC
Making no claims is not claiming the opposite Don. There are lots of things I don't claim.
I find it all somewhat amusing and boring at the same time (the scientist "dig", that is what you are trying to do with it) -- for I am failing to see the relevance.
I guess if I said I was considered one of the top Oracle Database experts on the planet, my words would carry more weight? (I don't make that claim, that would be a foolhardy claim for anyone to make -- I'm still learning Oracle every day)...
What was the point you were trying to make -- in this thread by including the credentials link? What was the implication there?
But you can call me mud for all I care really.
Still extremely curious about dba_free_space.
It is almost like using "cursor_sharing=force" to solve a massive enqueue wait problem. Something that also doesn't make sense, but hey who knows eh?
Definition - A Furball
Robert Freeman, February 28, 2005 - 6:30 pm UTC
Just a quick note: I wasn't calling Tom a Furball. A furball is a dogfight that has gotten really nasty... man against man, plane against plane... no tactics, no strategy, just pure skill. A dogfight doesn't get much worse than a furball.
February 28, 2005 - 8:31 pm UTC
Robert -
thanks for clearing that up (and thanks for the offline email as well).
Furball was always fighting words where I came from :)
Yeesh - back to the silly precepts AGAIN!
Don Burleson, February 28, 2005 - 6:36 pm UTC
Hi Tom,
Tom wrote "rules might not have to make sense, however they better be *possible*. sigh, i give up. How can things that are not true be true most of the time?"
Are you really suggesting that these cases below are NEVER true? May I quote you?
Flame away, Tom, but these statements are self-evident, obvious, and correct in the vast majority of cases:
- Index fast full scans will run faster after index reorganization whenever the density of the index entries becomes greater.
- Multi-block Index range scans will run far faster when the data blocks are arranged in index-key order.
- Large-table full-table scans will run faster after reorganization when the table has excessive chained or relocated rows, or low block density after massive DML (updates and deletes).
February 28, 2005 - 7:00 pm UTC
Don,
I pointed out the true half of that one (index fast full scans of a smaller index sure will be faster). I think I even said it was obvious (yup, I did).
three things are always true (but I would argue not all that commonly relevant -- sure, they are true, but the situations are not common. I mean, how many fast full scans of an index do you do on an active OLTP system? or large table full scans on an OLTP system with data purges?)
o index fast full scans of a smaller structure will take less time/resources than that of a larger structure.
o if the table is sorted by the index key, large range scans will take less IO/CPU overall to process.
o large table full scans of fewer blocks will take less time/resources than that of a larger structure.
Absolutely.
However,
o chained rows -> reorg in order to make a large table full table scan faster?
o migrated rows-> reorg in order to make a large table full table scan faster?
o and I guess the 4th one up there just fell completely off of the radar screen? (you might need to peek at your silver bullets paper -- you say the same thing there about freelists and the way they "don't" work, unbalanced, etc.) It is the myth in the making that is killing me here.
But hey, those multi-block index range scans sure will go faster if the table is sorted right (please tell us how to do that? List the steps. Might it be something you don't want to do more than once? that is my point). But yes, definitely true -- I would just hope more than "reorg" would come to mind.
I've pointed out the patently false ones over and over (lost count): chained rows, migrated rows, freelists.
Oracle scientists
Ajay, February 28, 2005 - 6:50 pm UTC
Don,
I agree that reproducible results are not required for forecasting or for modeling. However, forecasting and modeling are best used when you have actual cause and effect relationships -- something that you can extrapolate. Minus that, it's just a thought experiment at best.
Can we get a test case or two that settle this unbalanced freelist issue once and for all?
February 28, 2005 - 6:59 pm UTC
See MetaLink doc id 157250.1, "Freelist Management with Oracle 8i"
by Stephan Haisley
Stop ducking and weaving Don.
Howard J. Rogers, February 28, 2005 - 7:04 pm UTC
You wrote: "Flame away, Tom". Sorry... which planet are you on? Show us where Tom has flamed you. (Hint: pointing out that the SQL you write won't run properly, won't diagnose properly, and cannot have done what you claimed it did, isn't a flame).
Breathtakingly, you say that 'Index fast full scans will run faster' after a rebuild, as if this is news, and as if Tom ever denied it. He in fact agreed with this statement AGES ago... and then asked the real questions: so what? And at what cost? Who does index fast full scans in an OLTP environment? And are there not segment-type selections and similar database design issues that could resolve the matter better, completely and forever? You never answered any of those points when Tom made them, but now you imply that Tom somehow claimed smaller indexes won't scan faster than big ones!
If one bout of mindless repetition isn't bad enough, you then re-assert that "Multi-block Index range scans will run far faster when the data blocks are arranged in index-key order." Yet another "Duh!" statement. The real point there, as Tom also pointed out, is: and how are you going to re-arrange your table into index key order online, cheaply and forever? Tom asked you those questions earlier, and you didn't answer then.
And finally, " Large-table full-table scans will run faster after reorganization when the table has excessive chained or relocated rows". That is 100000% false. Because a CHAINED row is by definition too big to fit into an Oracle block however many times you care to re-organise your table. Another point which Tom explicitly made to you earlier. If you mean a MIGRATED row, then say so. But since you can't seem to get any of the basic Oracle naming conventions right (ASSM abolishes freelists, for example; it's not a bitmap freelist) then I suppose asking you to adhere to reasonable standards of precision is just a waste of breath.
Don: instead of just pretending that your reputation hasn't been dragged through the mud by your own inept SQL example, your own demonstrably feeble grasp of how Oracle actually behaves, and your propensity to make stories up when mere facts can't support your claims, you'd be better off actually studying how Oracle works for a bit.
Here's one for starters: DBA_FREE_SPACE reports on blocks which are not allocated to a segment AT ALL. Here's another: DBA_FREE_SPACE and FREELISTS are two completely different things. And one more: FREELISTS and FREELIST GROUPS are two completely different things, too.
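(The first point is easy to verify from the dictionary itself -- a small sketch, with a made-up segment name:)
-- DBA_FREE_SPACE lists extents that belong to NO segment:
select tablespace_name, file_id, block_id, bytes
  from dba_free_space
 where rownum <= 5;
-- blocks on a segment's freelist still belong to the segment, so they
-- show up in DBA_EXTENTS -- never in DBA_FREE_SPACE:
select segment_name, extent_id, bytes
  from dba_extents
 where segment_name = 'T';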
Oracle Scientist - clarification for Don Burleson
Jonathan Lewis, February 28, 2005 - 7:12 pm UTC
Don,
You commented in an earlier post that "database management is not a science despite the efforts of Codd and Date". You seemed to be suggesting that the concept of 'Oracle Scientist' was meaningless. On the other hand, I note that in one of your own articles you have written:
Cary is the "real-deal" Oracle scientist.
So presumably you do have some idea of what it means to be an Oracle scientist. Your article also has the line "You can instantly spot a fake Oracle scientist when ...", so presumably you must have some concept of what a 'real' Oracle scientist is. Moreover, on the more abstract concept of 'what is a scientist', you reference a presentation by Cary, which includes a quotation from Richard Feynman:
"Science is the belief in the ignorance of experts".
Since Richard Feynman was arguably one of the finest scientists of the 20th century, I think it reasonable to assume that his musings on science and scientists have some validity. Time and again, he makes the point that a scientist is someone who thinks, questions, assesses, tests, records and tells the truth -- not once does he say anything about a scientist being a person who wears a white coat and waves a piece of paper from some school. Being a scientist is a state of mind, not a qualification. (Mind you, I think Immanuel Kant beat him by about 250 years in his Critique of Pure Reason.)
Can you tell me whether it's showers or baths that indicate higher self-esteem -- I couldn't find any reference to this in any of the links you gave. I prefer baths at home, and showers when I'm travelling -- but that might be due to the fact that hotels always seem to have small baths (apart from one I stayed at in Houston which had a massive Jacuzzi -- which makes me wonder if people with low self-esteem can't get jobs that allow them to buy houses big enough to fit Jacuzzis -- which means the reason for the observation could be a lot more important than the simple existence of the observation). Remember -- correlation and causation are NOT the same thing.
What I did find was that you had said "MMPI is incredibly accurate and meaningful", and when I visited the link and read the few pages of text there, the most significant point it made was that the results should not be left to automatic systems or amateurs (or even inexperienced professionals) to interpret unless they know the circumstances of the person taking the test -- so the subjective input seems to be the important bit. That sounds a bit like an automated system for identifying objects that need re-organizing which then needs a second opinion to decide whether the objects really need re-organizing.
An interesting observation on the article you mentioned in a later response regarding forecasting -- another telling comment, if you read it through to the end, was the commonest reason why forecasts are so often poor: they are based on bad input assumptions. This suggests that any predictive tool for re-organising should at least be based on understanding the connection between symptoms and root causes, and the effects and side effects of the cure. (You might like to ask your IEEE Young Engineer of the Year about the risks inherent in feedback control systems, by the way. It's interesting how bad things can get if you try to use the past to change the future in a dynamic system.)
Finally - predictive techniques are possible. I can do it, so an AI system could be built to achieve a reasonable level of success. But I can do it because I am not limited to the information embedded in the database. One of the biggest problems that the cost based optimizer has is that all it knows is a crude approximation of the meaning of the data that is embedded in the number-crunching that dbms_stats can do. Things like predictive re-organizations will suffer from the same limitation.
don't put your words in my mouth
Alberto Dell'Era, February 28, 2005 - 7:20 pm UTC
Mr Burleson,
> Yes, that's my point, exactly! Thanks!
No, your point was that the OakTable members proclaimed themselves the "Einsteins of Oracle" by calling themselves "Scientists", and you put forward my review to demonstrate that that's what people believe; "strangely", you chopped the review exactly where I was defining the meaning of "Scientist" as "He who follows the Scientific Method", which is precisely their intended meaning.
In IT, we use our own vocabulary and metaphors, surely taken from real life but with a different meaning, since they are used in a different context; or do you actually believe that all the "Evangelists" out there believe themselves to be prophets?
I have a dream
Jim, February 28, 2005 - 8:09 pm UTC
I have a dream to stop the reorg myths.
I'm tired of dealing with DBAs looking for a job who think the answer to everything is:
A. Put it all in one extent.
B. Rebuild indexes regularly.
C. Reorg tables to take care of fragmentation.
D. Buy vendor's tools for large dinero to do step C "online".
(Okay, there is some hyperbole in the above)
I want DBAs who get hired that understand the system and how it works. I want DBAs who can demonstrate and use the scientific method to diagnose root causes and fix them. I want lazy DBAs, who aren't going to burn all their time reorganizing data, but instead look for more useful tasks; tasks that actually benefit the business. Until one has a deep understanding of a system and good scientific skills, they are going by rote alone. I want those people asking me if I want fries with that, not managing my databases.
Those that have an understanding of such things and can diagnose will go far no matter what vendor's RDBMS they use.
Feynman was asked once to give a lecture on quantum physics to freshmen. He thought about it for a couple of days and (at the time) said "I can't, we don't know it well enough to explain it to them." To me that is a touchstone: if you can't explain it to another interested party, who is not intimately familiar with the topic, then you don't know enough about the topic (yet).
So thanks for addressing the reorg myth. It is too often a cop-out for not looking at root cause and fixing the problem going forward. (And if reorging were the answer, I would write a pl/sql package to reorg and use dbms_job to auto-run the damn thing; that is more fun than doing that scut work myself.)
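(Tongue in cheek, but for flavor -- a minimal sketch; my_reorg_pkg is entirely hypothetical, dbms_job is not:)
declare
   l_job binary_integer;
begin
   dbms_job.submit( job       => l_job,
                    what      => 'my_reorg_pkg.reorg_everything;',
                    next_date => trunc(sysdate) + 1,
                    interval  => 'trunc(sysdate) + 1' );
   commit;  -- dbms_job queues the job as part of the transaction
end;
/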
Useless precept! Sorry Don.
Nilendu, February 28, 2005 - 8:13 pm UTC
Don claims something like "a fast-full scan will always be faster after the rebuild" (Tom also agrees). Yes, and so what?
Most OLTP systems don't even use an index fast full scan for most of their important queries. In fact, Oracle Apps, for example, advises setting fast_full_scan_enabled to FALSE.
Refer to this document -
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=216205.1
So? There goes the importance of Tom (and us!) saying that it's really easy to say anything and claim you "see it all the time". But every application is different. Every design is different. Even if something "worked for me most of the time in the past", it should really be verified and validated against the particular system it's going to be run against -- and certainly so before proclaiming it as absolute truth. An inaccurate statement in some context may indeed be true in some other, but that doesn't give us any right to make it a "truth". It can then cause a lot of harm, if followed blindly.
February 28, 2005 - 8:24 pm UTC
I'm still very curious as to how to observe freelists going out of balance constantly -- really I am. I'm concerned it is an issue I have in my own database, but I don't even know how to analyze it short of rolling about in 60 pounds of hex dumps that are block dumps
(yes, prepare cheek, insert tongue, very tongue in cheek -- but he did say
I see unbalanced freelist lengths all the time . . .
I just want to know "how" to see them too. But alas, we haven't seen that...)
Freelists again
Tamilselvan, February 28, 2005 - 9:16 pm UTC
<Quote>
I'm still very curious as to how to observe freelists going out of balance
constantly -- really I am, I'm concerned it is an issue I have in my own
database but don't even know to analyze it short of rolling about in 60 pounds
of hex dumps that are block dumps
</Quote>
Tom, a few months ago I tested and benchmarked DML operations on a table with multiple freelists and one freelist group, and with 2 freelist groups, on DMT, on LMT with manual segment space management, and also on ASSM.
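(Presumably against tables of roughly this shape -- a sketch, with made-up names, of a table declaring multiple freelists and freelist groups in a manually-managed tablespace:)
create table t ( x int, y varchar2(80) )
  storage ( freelists 4 freelist groups 2 )
  tablespace users;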
What I found was: only when multiple freelist groups are defined are some of the freed blocks after deletion linked to the 2nd freelist group, and those never get reused on a single instance.
However, I did not observe any unbalanced freelists.
Tamil
March 01, 2005 - 8:07 am UTC
I know, I'm asking Don -- since he "sees them all of the time", it has been quiet on that front.
Bob B, February 28, 2005 - 9:59 pm UTC
This is a fascinating thread. It's great to see so many well-known professionals providing input on this forum. I'm a little disappointed in some of the barbs against Mr. Burleson. While I agree there are some flaws in some of his arguments, I do respect his input. In fact, the input of all the professionals on this thread has been greatly appreciated. Thanks!
One metric that I've been wanting to examine in regards to a predictive model is Table Size/Index Size (the ratio) vs Time. I surmise that "stable" indexes would plot as straight lines. A plot with a positive slope (i.e. increasing) would probably be a candidate for a rebuild to a larger size. A plot with a negative slope (i.e. decreasing) would probably be a candidate for a rebuild to compact the index.
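(For what it's worth, a sketch of how one might sample that metric -- segment names are made up, and each sample would be stored in your own history table to plot over time:)
select t.bytes / i.bytes as size_ratio, sysdate as sample_date
  from dba_segments t, dba_segments i
 where t.segment_name = 'MY_TABLE'
   and i.segment_name = 'MY_TABLE_IDX';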
I will also echo one of Mr. Burleson's more interesting queries: what additional diagnostic information can we get out of the AWR tables that could add to TKPROF, STATSPACK, and other diagnostic reports?
To all great oracle technologists
A reader, February 28, 2005 - 10:25 pm UTC
Hi Jonathan/Tom/Don/Robert/Mike,
Why don't you guys participate more in this forum, albeit limiting yourselves to technology and not resorting to insults? I guess we all, including yourselves, would have learnt something in the process. For instance, apart from technology, I learnt what not to do in a forum :-)
I think even Tom wouldn't mind participation from you guys and would love to learn something new.
March 01, 2005 - 8:08 am UTC
Jonathan is a frequent participant actually, either by intention (he posts) or by reference (I frequently quote and point to his material).
Slopes
Ajay, February 28, 2005 - 10:35 pm UTC
<quote>
A plot with a positive slope (i.e. increasing)
would probably be a candidate for a rebuild to a larger size. A plot with a
negative slope (i.e. decreasing) would probably be a candidate for a rebuild to
compact the index.
</quote>
A primary key will have a positive slope that tends to 1. The only negative slope would be with an index on a nullable column if > 50% of the rows added had null values for that column. Given that null values aren't indexed, what would compacting this index give me?
Slopes correction
Ajay, February 28, 2005 - 10:54 pm UTC
Sorry. I misread the post. Table size/Index size vs time would tend to 0 for a primary key. How would the slope ever be negative? That would mean that the index's rate of growth is increasing faster than the table's rate of growth. I can't think of why that would happen.
A few true things
Mike Ault, March 01, 2005 - 12:11 am UTC
- Index fast full scans will run faster after index reorganization whenever the density of the index entries becomes greater.
I don't believe you will argue with this, Tom. Yes, the entropy of the index may increase over time; in fact, everything's entropy increases with time, the universe tends towards disorder. So does that mean we let it? We do what is easiest?
- Multi-block Index range scans will run far faster when the data blocks are arranged in index-key order.
Again, this isn't saying "rebuild all tables in index key order"; it is simply stating that as the clustering factor is reduced, the efficiency of an index at retrieving data from its underlying table increases. Some tables may benefit from this, many won't. It requires analysis and understanding to know when to use it and when not to.
- Large-table full-table scans will run faster after reorganization when the table has excessive chained or relocated rows, or low block density after massive DML (updates and deletes).
Here folks seem to forget we have the capability to move to large block size tablespaces. A larger blocksize means we might be able to eliminate chained rows and migrated rows in a table reorganization, thus doing what you say we should: fixing the issue instead of just bandaiding it. We have the range of 2 to 32K block sizes, let's use the darn things. Yes, I realize that "that is not what they were designed for"; however, they work great for it. A hammer may have been designed to pound nails, but it works equally well at cracking walnuts, breaking rocks or ringing a bell. Think outside the box sometimes.
March 01, 2005 - 8:24 am UTC
Right, I won't argue with that fact.
What I have stated is: hey, if you have an index that is inserted into/updated row by row randomly, it'll hit a steady state, be fat. Now, the case where this happens is typically what type of system? OLTP. In OLTP, how many times per second are your end users FULL SCANNING anything? Hopefully not a lot, else you are definitely doing something wrong.
So, it is patently true, but not common. Like I said, it is not "daft" at all to want the modifications to my transactional system to fly (otherwise, why do people ask me how to make a transaction more efficient -- end user response times are their main concern). If I index fast full scan some index during a batch process but I modify the data lots during the day....
And entropy this is not; steady state this is. "I generate 10 logs, but after a rebuild I generate 45, 40, 35, ...... until it gets back to 10 for the rest of the month. After the next rebuild, 45, ...." Many indexes that would be identified by "look for whitespace" are in fact in a nice steady state and would happily stay that way forever.
so, a physical schema change is what you are suggesting for the last one there? Not a reorg, but a "stop, look, figure out why, then and only then apply correct solution?" Interesting, wonder why I don't see that concept myself. I'll have to ponder that. Nice concept "think about the root cause, understand the root cause, develop solution for root cause, apply solution". I like that.
(but I would not suggest multiple block sizes/tablespaces)
Slopes
Bob B, March 01, 2005 - 12:57 am UTC
<QUOTE>
Sorry. I misread the post. Table size/Index size vs time would tend to 0 for a primary key. How would the slope ever be negative? That would mean that the index's rate of growth is increasing faster than the tables rate of growth. Can't think of why that would happen?
</QUOTE>
Table size/Index size vs time should tend to 0 for "stable" indexes. An example of Table Size/Index Size having a negative slope would be a "stable" index that was just rebuilt. As it tries to get back to its desired size, the index will grow at a faster rate than the table until it reaches equilibrium again.
Of science, scientists, and predictions....
Mark J. Bobak, March 01, 2005 - 1:32 am UTC
First, I say up front: I am a member of the OakTable Network, and I'm proud to be associated with it and its members.
As to "science" and "scientists", well, I think the whole point is using the scientific method to investigate, hypothesize, prove, disprove, and publish your results. When you do that, the community gains. When one espouses opinions as facts, myths are born. This damages the community. We end up expending vast resources trying to kill and dispel myths. (Just look how many hours have been spent on just this thread, between all the people who have written here, and the many, many, many more who have read everything written here.) Now, no one is perfect, and sometimes people are misquoted or misunderstood, and myths are still born or renewed. But if you publish a statement, and along with it you demonstrate the concept with a test case, then it's that much more difficult for a myth to be born or propagated. It's very simple, for example, to demonstrate that no matter how many times you rebuild an index, the clustering factor will never, ever change. To have an effect on the clustering factor of an index, you MUST change the physical order of data in the table or the order of the columns in the index. (If you don't believe me, ask me for a test case, I'd be happy to provide one. ;-))
It's hard work, backing everything you write with a demo or proof. And even after the hard work is done, someone may criticize and/or find fault with your work. But again, ultimately, we all gain when this happens. It's a tough break when someone fills your proof full of holes, but sometimes, that's the way it goes. I'm reminded of an episode of CSI (for those not in the US or not familiar, it's a weekly TV series about forensic scientists and the crimes they investigate) where one of the characters says to the brilliant leading scientist, "Grissom, you were wrong?" Grissom replies "Yes, I guess so," and goes nonchalantly about his work. The guy replies "But, you worked so hard on that case, doesn't that bother you?" To which Grissom replies "I don't mind being wrong, it's how I get to being right." (Paraphrased, but you get the idea.)
That's really the essence of the OakTable, in my view. It's about testing and proving and disproving and challenging each other to be accurate. And then Oracle keeps changing and evolving, and something true today on version 9.2.0.6 may not be true tomorrow on 10.2.x. So, it's a never ending challenge of adding to the community knowledge bank and also re-validating the old facts to see if they still hold true. And one day, all the myths will be stomped out, and all will be right with the world. ;-)
Mark J. Bobak
mark@bobak.net
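(For the curious, a minimal version of the test case Mark offers might look like this -- table and index names are made up:)
-- scatter rows so the table order does not match the index order
create table t as select * from all_objects order by dbms_random.value;
create index t_idx on t(object_name);
exec dbms_stats.gather_table_stats( user, 'T', cascade => true )
select clustering_factor from user_indexes where index_name = 'T_IDX';
alter index t_idx rebuild;
exec dbms_stats.gather_index_stats( user, 'T_IDX' )
select clustering_factor from user_indexes where index_name = 'T_IDX';
-- the two numbers match: a rebuild never touches the table's row order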
It never stops, does it?
Howard J. Rogers, March 01, 2005 - 5:57 am UTC
Mike Ault wrote: "Again, this isn't saying, rebuild all tables in index key order, it is simply stating that as clustering factor is reduced..."
Right. And just how exactly do you propose to reduce the clustering factor *except* by rebuilding tables? You do still remember the lesson you choked on about what the clustering factor actually represents, don't you?
And pray tell: when a table has more than one index on it, which index key order does the table get rebuilt to use? And what happens to the clustering factor on the other indexes when you do?
"the efficiency of an index at retrieving data from its underlying table increases."
Ah. We're into gross over-simplification territory, I see. Your statement is not true in general. The efficiency of ONE index's retrieval of data MIGHT improve if the clustering factor gets lower due to table re-organisation. But the efficiency of OTHER indexes will not, since a table can only ever be re-organised in one ordering, and multiple indexes will have different orderings.
Pushing the clustering factor as some kind of performance tuning touchstone is weak, Mike. There are far more meaningful things to worry about, like the use of key compression and whether skip scanning is capable of happening. And those things are in the DBA's direct and (relatively) easy control.
And again:"Here folks seem to forget we have the capability to move to large block size tablespaces."
No we don't, Mike. Not unless we are using Direct I/O. You always seem to miss that minor detail out.
"A larger blocksize means we might be able to eliminate chained rows and migrated rows in a table reorganization, thus doing what you say we should, fixing the issue instead of just bandaiding it."
No, your approach is not a fix, but is entirely bandaids. The odd-sized buffer caches don't have keep or recycle pools. Nor are they self-managing in 10g. So you can call that a fix if you want, but I rather think re-creating a database that uses the 'right' block size from the get-go is a better approach. Assuming you aren't constrained in your choice of blocksize by the File System company you keep, of course.
And yet again: "We have the range of 2 to 32K block sizes, let's use the darn things."
No, we don't necessarily have that range at all. We have what our file systems permit us in the first place; and since you and Don seem to put so much faith in the AWR, surely you'd object to the loss of self-tuning capability that employing these odd block sizes would bring in its wake?
Oh dear: "Yes, I realize that 'that is not what
they were designed for' however, they work great for it."
No, they don't, in general. They might or they might not. There are far too many other factors to take into account before that sort of sweeping statement can be made with any real justification.
I must say: I've been gobsmacked by this entire thread. The two arch-purveyors of the OVER-simplification just keep on going, with barely a pause for breath, even as their folly is pointed out to them. I wouldn't have believed it could be so except I read it here with my own eyes. I just wonder: do the pair of you never stop to consider the sort of technical sloppiness that has attended practically your every word in this forum? Is there never a time when either of you pause for reflection, and think, 'ooops'? Will the day *ever* come where we see some small inkling of humility or the acknowledgement of some token degree of embarrassment at the misinformation you two have been pumping out of late?
Isn't science about reproducible results?
Peter, March 01, 2005 - 7:41 am UTC
Above Don comments:
<Quote>
Now, I don't pretend to be a scientist here, but I do know that "reproducable results" are NOT required for forecasting and modeling.
</Quote>
Then cites an incomplete (and to me, out of context) snippet from the following Norwegian web page on ECONOMIC forecasting:
http://folk.uio.no/rnymoen/forecast_background.htm
However, reading the paper in full you will see that the authors look to improve the quality of their forecasts by using a structured, statistical approach. They also discuss reasons behind forecast failures. Nowhere do the authors seriously postulate guessing. A more appropriate quote from the site would be:
<Quote>
Today, it goes without saying that extensive empirical evaluation should be an integral part of the model building process.
</Quote>
On the way to work today I constructed a forecast model to allow me to determine which shirt to wear, based on the amount of fuel in my car -- so today it's beach wear... shame it is snowing.
Reproducible results
Ajay, March 01, 2005 - 9:13 am UTC
I think Peter is dead on. You could calculate the correlation between Peter's shirt and the amount of fuel in his car but, without a hypothesis, without first principles, it's just so much more noise.
The same thing applies to old war stories and predictions about Oracle tuning strategies. "I've seen it happen" is superstition. "I know how/why it happens and I can show you" is science.
Truth and True Science vs. flim-flam politics!
Robert, March 01, 2005 - 9:56 am UTC
Thanks, Tom!
Tom, you should get the Nobel prize for patience if not for your Oracle skills.
A reader, March 01, 2005 - 10:51 am UTC
Interesting thread
A reader, March 01, 2005 - 1:27 pm UTC
I had a few comments come to mind while catching up on this thread.
My first thought was "Wow, look at all the miscommunication going on." One of the nice things about the internet is that it's so easy to link to things and have a great amount of discussion. However, that's one of its biggest failings as well. It's much harder to read people's body language when you don't see the body. :-) Unfortunately, not much can be done, except to be as clear and concise as possible, and wish for the best.
My second thought was that we have two arguments on different sides of the same coin. On one hand, we have Tom's approach, "find the underlying problem and fix the design," and on the other, we have Don's approach, "find out what's most impacting users *right now* and fix it."
Neither approach should be discounted. In fact, I would bet that most of us have used both approaches at times. For instance (though I have to be vague for the time being, my apologies), just a month or so ago we had a situation with one process which was just going way too slow. We had a very limited amount of time to fix the problem, otherwise we would see quite an increase in CS calls and customer complaints, as well as the loss of image and actual money.
We obviously had to take the approach of getting around the problem as quickly as possible, and did so. Our solution at the time was to add a hint to several queries. This, of course, worked for the time being, but was not a good permanent solution, not the *right* solution. So, we continued our analysis. Perhaps two days later, we had found the problem, made a new function-based index, and altered our queries so that they would use this instead. We also learned more about how the system *actually* works versus how we thought it would work, and took notes.
Finally, one other thought that occurred to me during this discussion, specifically some things that Don said on a linked page, was "What does a degree or certification have to do with the truth?" I have a degree from a decent college (in my field, oddly enough), and I know people with degrees from more and less prestigious colleges, as well as people without a degree at all. I have found that these people in all cases are quite capable of thinking well and figuring out the truth. I've also found that some people with the same set of credentials are pathetic, and can't think for themselves to save their lives. I learned to think on my own, as did everyone I hold respect for. I find that introducing yourself by your credentials is offensive and tasteless, and that credentials are never a substitute for hard work and good research. If you want to put *that* in your introduction, go for it.
Thank you all for your hard work and interesting comments and insight (I mean everyone).
weak correlation between higher learning and capability
Jim, March 01, 2005 - 1:52 pm UTC
I'm reminded of the old joke "You can always tell a Harvard man, but you can't tell him much."
One of my bosses in a prior job was a highly regarded Actuary. He did not graduate from college. Should I have thought less of him? Phooey! He was witty and smart, and while attending college he found that to become an Actuary you had to pass a series of difficult exams. He started taking them, left college, and finished the exams. On average it takes 7 years to pass all your Actuarial Exams -- for those that pass all of them.
What about Dave Thomas, who founded the fast food chain Wendy's? It wasn't until close to the end of his life that he went back and graduated high school. (He said he did it to encourage his daughter to stay in school.)
The mettle of a person is who they are and not what school they went to. Conversely, I am not pooh-poohing those people who attend excellent schools (MIT, Oxford, to name a few). They have attended excellent schools, but what is important is what they do with that opportunity, what type of people they are.
Attending or not attending a prestigious institution doesn't determine what type of person you are. What you do with your life, what you do with your talents, how you apply yourself is much more important. Let's not get wrapped up in high society snobbery.
caveats of statistical models ...
Gabe, March 01, 2005 - 2:03 pm UTC
Mr. Burleson asks:
<quote>
Knowing that Redwood is building an index rebuilding prediction tool (AMT) as-we-speak, tell us about how it should predict the benefits of an index rebuild
</quote>
and then he proceeds to include [yet another] link to his website.
[In passing, one doesn't need a West Point degree to see through his "attack is the best defense" antics... but let us take his insistence on a predictive model for index rebuilds at face value...]
The web article at the end of that link, titled "Oracle 10g Predictive Modeling", undated [and changing], makes reference to two papers/authors:
1. "Oracle Database 10g: The Self-Managing Database" by Sushil Kumar, Nov 2003
2. "METRIC BASELINES: DETECTING AND EXPLAINING PERFORMANCE EVENTS IN EM 10GR2" by John Beresniewicz, Feb 9-10 RMOUG
To first properly define the terms, what could this predictive model he's referring to be?
i. A tool based on rules/precepts/postulates. The RBO is such a tool, and we all know what could happen when postulating "an index access is better than a full table scan". The precepts he put forth have already been addressed. The quality of such a tool is directly related to the quality of the underlying postulates.
ii. A statistical predictive model (a math formula really) developed using data mining techniques.
I believe it is safe to assume he is talking about statistical predictive models... there is another related article he also linked to, titled "Linear Regression modeling for Oracle database", undated [and changing], which strengthens this assumption.
[In passing, the following statement in that paper -- <quote>Virtually every predictive model in Oracle software uses the database to create the predictive model</quote> -- is of dubious value... but the fact he links that to Oracle's data mining offerings suggests he confuses the existence of a tool implementing various mining algorithms with the existence of statistical models, which are the results of applying the tool to representative sample data.]
Going back to his article referencing Mr. Kumar and Mr. Beresniewicz...
There is nothing in paper #2 to warrant one to conclude anything about predictive reorganization of tables or indexes.
To use an analogy... a bank building a high-level KPI system in order to allow for (the quotes below are from the "Value Propositions" slide):
i. <quote>reliable and adaptive performance alerts</quote>
ii. <quote>deeper understanding of macro-level system statistics</quote> [emphasis on "macro" and "system" may be appropriate]
which may lead to
iii. <quote>improved manageability</quote>
iv. <quote>competitive advantage</quote>
does not implicitly provide for a mechanism to appropriately manage individual bank customers (they build customer-level statistical models for that)... one can indeed do trend analysis at macro/system level and build predictive models... but deriving information from the aggregate level down to the individual is, well, perilous. Or maybe another analogy... having aggregated historical data for the S&P index may allow for forecasting the index itself, but not every component stock of the index. But since he has talked with Mr. Beresniewicz (he seems to have quoted him in the web article too), maybe he can get confirmation that macro-level system statistics in EM 10GR2 will just do the trick on predicting index reorgs... or we'll just wait for the release. To me, it looks like Mr. Burleson has taken that fine paper way out of context.
Regarding paper #1... the web article says:
<quote>In an OracleWorld 2003 presentation titled Oracle Database 10g: The Self-Managing Database by Sushil Kumar of Oracle Corporation, Kumar states that the Automatic Maintenance Tasks (AMT) Oracle10g feature will automatically detect and re-build sub-optimal indexes.</quote>
[In passing, there is an end quote there... but where is the beginning? Luckily, we can find the same thing in yet another article on the theme, the one linked in the original question in this thread.]
I couldn't find in that article Mr. Kumar stating word-for-word that [AMT will] "automatically detect and re-build sub-optimal indexes" (why bother quoting someone if not 100% precise?)
The actual quote from the Automatic Maintenance Tasks section is:
<quote>AWR provides the Oracle Database 10g a very good "knowledge" of how it is being used. By analyzing the information stored in AWR, the database can identify the need of performing routine maintenance tasks, such as optimizer statistics refresh, rebuilding indexes, etc. The Automated Maintenance Tasks infrastructure enables the Oracle Database to automatically perform those operations.</quote>
From "can identify the need of performing ... rebuilding indexes" to the existence of a statistical predictive model there is a bit of a stretch. It may be just wishful thinking at work here, or Mr. Burleson has some knowledge from other sources he omitted to include.
Mr. Kumar proceeds to describe in his paper that ADDM uses a <quote>sophisticated problem classification tree</quote>... and classification trees are not predictive models. Without taking anything from Mr. Kumar, I found the article he refers to on page 13 even more interesting:
http://download-east.oracle.com/owparis_2003/50092_dias.doc
... and again, no mention or hint of the existence of statistical models.
True, the existence of AWR time-series data does allow for trending and could act as a backdrop for identifying the quantitative effects of various changes (including index reorgs)... true, the ADDM and related advisors could employ the problem classification tree to suggest an index reorg... but where are the statistical models for predicting the benefits?
The fact that AWR has a wealth of time-series performance data does not mean reliable statistical models [for the index reorg problem] could be derived from it. To use again the bank analogy... my bank has a historical record of absolutely all my transactions across all the various accounts I have with them... but that is hardly enough for them to predict whether I will default on my mortgage over the course of the next 9 months... to be able to develop such a statistical model there is a need, not just for detailed transaction data, but also for a large sample population... a few hundred or even a thousand indexes in a database is hardly enough. I believe this is the exact same point Jonathan Lewis made [though much more eloquently] when he said:
<quote>Finally - predictive techniques are possible. I can do it, so an AI system could be built to achieve a reasonable level of success. But I can do it because I am not limited to the information embedded in the database. One of the biggest problems that the cost based optimizer has is that all it knows is a crude approximation of the meaning of the data that is embedded in the number-crunching that dbms_stats can do. Things like predictive re-organizations will suffer from the same limitation.</quote>
[To Mr. Lewis: if I misunderstood/misrepresented your statement, I do apologize]
In conclusion, some of the caveats with developing statistical models for predicting the benefits of an index rebuild:
A. Small sample data
B. Highly unlikely the model can be derived from macro-level system statistics
C. There is also the issue of self-tuning statistical models... I believe Mr. Lewis has raised this one as well
So, it is not clear that "Redwood is building an index rebuilding prediction tool (AMT) as-we-speak"... one based on self-sustaining statistical models seems highly unlikely... a suggestion for an index rebuild at the end of a branch of the ADDM classification tree, showing through one of the advisors, is quite possible. And that still leaves the DBA/developer with the need to grasp the science behind index rebuilds rather than resorting to somebody else's precepts.
On a different topic... just one small correction for rajXesh in his review "In defense of the Don" [yeah, he's one of a kind].
You said <quote>Randomly delete ~10% of the rows and then add 10% back.</quote>
Since there is a 1 in 5 chance that a number between 1 and 500000 made it into the table, that del_rows(10000) actually randomly deletes ~2%, and then add_rows(10000) adds exactly 10000 rows. For what you intended to show, all that was not even necessary... just reorg after loading the original 100000 rows and measure.
HJR from bad to worse
Mike Ault, March 01, 2005 - 4:06 pm UTC
Clustering factor is just one measurement; it is the measure of how well the table and index are ordered in relationship to each other. One way of improving an index's CF (notice the singular usage here, HJR) is either to re-order the underlying table, which is only applicable in very specific circumstances, or, in the case of a concatenated index, to re-order the column order in the index. Re-ordering the column order will require re-ordering of the SQLs that use the index -- where clauses, in many cases. Show me how reordering the where clause will change the results (as someone else commented, yet was allowed a get-out-of-jail-free card since they were bashing me). Re-ordering of where clauses is a tried and true tuning method.
If the table is not ordered in any way, how will re-ordering it by one of the indexes truly harm the other indexes? Some will get a worse CF, others will get a better CF -- overall, perhaps a wash. However, it is an operation that would require repeats for busy tables (back to my caveat that it is only for specific tables whose usage is well understood).
Nowhere have I said these things must be done, should be done or have to be done; in fact I have gone out of my way to state they are specific tools for specific situations and should only be applied after careful study.
It is only the ones who love to take things out of context that seem to have issues with this. In one psych class they were called type 4 individuals: those that must have a rule for everything, that can't understand shades of gray, and that get rather upset when anyone says something that isn't completely black or white. Therefore they tend to isolate what they believe are rules:
"Mike says re-order all concatenated indexes to reduce CF"
"Mike says re-order all tables to key order"
etc. etc. etc.
When in fact it is not what I have said at all. HJR seems to believe indexes prosper from benign neglect, reaching an equilibrium that will perform well in every case. He also seems to believe that you should tune indexes for the minority transactions (insert, update and delete) with the same weight as the majority transaction (select). All the power to him. I don't believe in it myself. I believe in tuning for the majority transaction. Maybe, on some rare occasions, that is insert, update or delete, but so far the exceptions have been rare (for example, a database to store telephone switch data where the majority transaction was INSERT -- no updates or deletes and only rare selects).
If someone is indexing frequently altered columns, maybe instead of punishing everyone it is time to look at the application; perhaps its logic is incorrect.
Anyway, I believe the topic to be exhausted. Go ahead and bash away HJR, it's what you do best.
Mike
March 01, 2005 - 4:18 pm UTC
but Mike, in order to:
<quote>
re-order the column order in the index.
</quote>
You have to change many of the questions you ask. CHANGE the question, not "require re-ordering of the SQL's that use the index, where clauses in many cases", whatever that means.
Change the questions you ask -- you have to change the set of predicates you use, which typically is in the realm of "nope, people want to ask the questions they ask, not the questions you want them to ask"
I mean, hey, if I query:
where a = :a
where a = :a and b = :b
where a = :a and b = :b and c = :c
what "reordering of the SQL's where clause" is going to let me use an index other than (a,b,c) -- (forgeting index skip scans for a minute because I'm very curious on this "reorder the sql's where clause" trick).
Let's say we discover the table is naturally sorted by B,C,A
what "re-ordering of a SQL's where clause" will let the above three queries use this rebuilt index on (B,C,A)???
where a = :a
seems to be left high and dry?
I have no idea how the reordering of predicates in a where clause would change the use of an index (well, except for the RBO, but that is another story altogether since the RBO is clustering factor "unaware")
The order of columns in a concatenated index is derived from the questions asked of the data.
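(To make that concrete -- an illustrative sketch with made-up names; only an index leading with A can serve all three questions:)
create index t_abc on t(a,b,c);
-- where a = :a                        -> can use T_ABC
-- where a = :a and b = :b             -> can use T_ABC
-- where a = :a and b = :b and c = :c  -> can use T_ABC
-- rebuild it as (b,c,a) and "where a = :a" is left with no leading
-- column (barring a skip scan); no "re-ordering of the where clause"
-- changes that, because the predicates themselves stay the same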
<quote>
Show me how reordering the where clause will
change the results (as someone else commented, yet was allowed a
get-out-of-jail-free card since they were bashing me). Re-ordering of where
clauses is a tried and true tuning method.
</quote>
no one got a get out of jail free card. I stated:
...
I would really like to hear more about this? As the clustering factor could
definitely change but the query would have to CHANGE the question it is asking
as well.
........
you never followed up (still curious what the default array size is too, but this is much more important).
"re-ordering of where clauses" -- I don't know what you even mean by that? Please -- example, simple example -- show us a query before and after "re-ordering the where clause".
Tiny example -- just what it looks like conceptually. To me, this tried and true tuning method (which sounds like an RBO precept from days gone by) is meaningless. Or I'm totally missing what you mean.
I'd like to see a tiny example to illustrate your point, you don't need to set up a test case -- just show us the query and index you might have "before and after" reordering "the where clause"
And -- if you have Don's ear, please -- I am dying to understand how he measures the unbalanced freelists that he sees so often. I may well be suffering from it in my databases; I just want to understand how to see it, how to detect it. Maybe he missed the repeated requests above -- he has written about the phenomenon in other papers of his, so it must be true -- just curious how he detects it, is all
(for anyone who didn't read all of the details above that last part is somewhat tongue in cheek, the thing "seen all of the time" doesn't actually happen with freelists)
To HJR: Multiple block sizes
VA, March 01, 2005 - 4:21 pm UTC
<quote>
No we don't, Mike. Not unless we are using Direct I/O. You always seem to miss that minor detail out.
</quote>
Could you please explain what direct i/o has to with multiple blocksizes in 1 database?
<quote>
No, we don't necessarily have that range at all. We have what our file systems permit us in the first place; and
</quote>
Assuming Unix, most UFS filesystems have a 512-byte blocksize. All nK blocksizes are a multiple of this. So, why do you say that we are limited by what our filesystems permit us? Assuming I set up the buffer caches for them, when would I be unable to create any tablespace using the 2-32K range?
<quote>
since you and Don seem to put so much faith in the AWR, surely you'd object to losing the self-tuning capabilities that employing these odd block sizes would bring in its wake?
</quote>
Why would using these odd block sizes make one lose the self-tuning capabilities of 10g/AWR, etc?
Thanks
Block Size aside
Niall Litchfield, March 01, 2005 - 4:51 pm UTC
=========
Could you please explain what direct i/o has to with multiple blocksizes in 1
database?
=========
DirectIO, or indeed RAW, allows Oracle to read and write directly to the hardware; otherwise Oracle will issue read calls that get translated by the OS into filesystem read calls. It follows that with directIO or RAW, a read of an Oracle block is a single read request for that blocksize worth of data, while with a file system it will be a number of reads.
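(For anyone who wants to try it, a minimal sketch of asking Oracle for direct I/O on a filesystem -- the parameter is real, but whether SETALL is honoured depends on your platform and filesystem, so treat this as illustration only:)
-- needs a bounce; requires O_DIRECT support in the filesystem
alter system set filesystemio_options = setall scope=spfile;
-- once in effect, a read of an 8k Oracle block is issued as a single
-- 8k request to the device instead of going through the OS file cache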
============
<quote>
No, we don't necessarily have that range at all. We have what our file systems
permit us in the first place; and
</quote>
Assuming Unix, most UFS filesytems have a 512-byte blocksize. All nK blocksizes
are a multiple of this. So, why do you say that we are limited by what our
filesystems permit us? Assuming I setup the buffer caches for them, when would I
be unable to create any tablespace using the 2-32K range?
================
You aren't unable to create them; there is just little point in doing so.
================
<quote>
since you and Don seem to put so much faith in the AWR, surely you'd object to
losing the self-tuning capabilities that employing these odd block sizes would
bring in its wake?
</quote>
Why would using these odd block sizes make one lose the self-tuning capabilities
of 10g/AWR, etc?
====================
Because the dynamic buffer pool resizing etc. that was introduced in 10gR1 just doesn't apply to these pools. You have to size them manually. I'd bet money, if I were a betting man, that this will be relaxed in a later Oracle version.
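(To make "manually" concrete -- a sketch, with invented sizes and file names: the nK caches sit outside the automatically managed pools, so you carve them out yourself and they stay whatever size you set:)
-- sga_target/ASMM will not grow or shrink this pool for you
alter system set db_16k_cache_size = 64m;

create tablespace ts_16k
datafile '/u01/oradata/db/ts_16k_01.dbf' size 100m
blocksize 16k;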
Quotes
Niall Litchfield, March 01, 2005 - 4:52 pm UTC
I wish I could quote earlier replies better (or use HTML code or some subset of it).
March 01, 2005 - 5:14 pm UTC
sorry, all you get are <pound sign>B and <pound sign>b for <bold> and </bold> when you feel like shouting :)
testing
A reader, March 01, 2005 - 5:24 pm UTC
thanks
regards
Thanks Niall
A reader, March 01, 2005 - 5:29 pm UTC
Thanks for jumping in Niall, and sorry if I am being dense, but I still don't get it...
Mike said "we have the capability to move to large block size tablespaces."
You said "...with a file system it will be a number of reads..."
So what? What does that mean for the ability to move to large block size tablespaces? For a UFS filesystem where the blocksize is 512 bytes, even a "standard" 8K blocksize tablespace can be called "large block size", right? So what's the difference?
"You aren't unable to create them, there is just little point in so doing"
Why is that?
Thanks
More on IO
Niall Litchfield, March 02, 2005 - 1:35 am UTC
Reply to Mr. Burleson on answering my question
Bill Schwartz, March 02, 2005 - 10:14 am UTC
Mr. Burleson, apologies for jumping in late but was not in the office on 03/01/2005 and did not have an opportunity to read your response until this morning.
You said and I quote you here:
"Of course, there are notable exceptions, but lets face it, it the world today
you often dont get a chance to get to know people, and a resume provides you
an excellent insight into the mind-set and experience of the stranger at the
other-end of the web. . ."
I must take exception to this. A resume provides you with nothing but a bunch of words. Let's face it: in today's world a resume is no better than the person who composed it. It may be factual, it may be half-truths. There is NO WAY to tell that over the web. Only in a face-to-face meeting can you get comfortable with what was written (which is why most ((stress MOST)) employers require face-to-face interviews whenever possible).
I can be Oracle OCP certified. Does that make me an "expert DBA"? Of course not. No more than my possessing a college degree makes me CEO material. It merely means that I made it through 4 years of college with passing grades. So when I am reading material presented by someone who is an "expert", the first thing I look for is the validity of their arguments, and whether or not they can back up what they say with examples based in fact. I also look for what their peers have to say on the subject. Anyone can call themselves an expert - I ignore that, as I ignore people's "Curriculum Vitae", as you put it. If someone has something valid to present, and can use examples to present it so I can try it for myself (reproducible results if you will), that is worth more to me than any certificate.
Thanks for the response, and Tom sorry to get off-topic with this. I thought that it was important enough to get some clarification from Mr. Burleson.
There is a great deal of useful information here but...
Mike Jones, March 02, 2005 - 4:57 pm UTC
There is a great deal of useful information in this thread, but gleaning it out requires wading through some less-than-helpful prose once you get past the initial well-stated post and reply. Not that I'm complaining. A good bit was very entertaining; sort of like watching Celebrity Deathmatch. Jonathan Lewis and Connor as always add great value. Maybe some kind-hearted soul will summarize the discussion into a pros-and-cons type of table, for those who don't know Tom, Don, Mike, etc.
Refocus
Mike Ault, March 03, 2005 - 12:53 pm UTC
Since we seem to have gotten lost in religious wars, let's refocus on the topic.
Given that certain types of indexes will benefit from periodic coalesces or perhaps full rebuilds (coalesces are less costly) then:
What is a good metric for determining when the following types of indexes may benefit from rebuilding:
1. So called right-handed indexes (sequence or date based, monotonically increasing with delete activity)
2. Bitmap indexes on columns subject to INSERT, UPDATE and DELETE activity (yes, I know they weren't designed for that; however, sometimes they are of great use here)
3. Indexes suffering from excessive white space (yes I know, some feel these should never be coalesced or rebuilt, but if the index is larger than the table, I am afraid I am going to consider it.)
Now, notice I didn't say "scheduled" coalesces or rebuilds, I didn't say, "rebuild all indexes" and heaven forbid, not a precept in sight. Just seeking information from the folks at large on how they would determine when (or if) the above types of indexes would benefit from a coalesce or rebuild.
Also, assuming that the dependent SQL can be rewritten, or additional indexes to support some of the dependent SQL will not overly tax the system (i.e., majority SELECT activity, or SELECT is the transaction type that needs to be optimized), is there a case for re-ordering the columns in a concatenated index to reduce the CF and improve its retrieval efficiency?
March 03, 2005 - 1:13 pm UTC
Mike, I'd still love Don to address this freelists thing, forget religion for a minute, come on -- he said
<quote>
>> You opined: Given that freelists cannot be unbalanced, I'm not sure what to
make of this.
Huh? Then why does Oracle have the dbms_repair.rebuild_freelists procedure?
When you have multiple freelists (say 50 per table), the freelists are
independent from each other, and Oracle cannot share blocks between freelists. A
non-parallelized delete job will populate only one of the 50 freelists, leaving
a table with loads of free blocks that only one of the 50 concurrent updates can
use.
...
I see unbalanced freelist lengths it all the time . . . </quote>
We all just want to know: HOW, how do you see it all of the time? He has written this before too -- and what's up with dba_free_space? Something is just not making *any sense* here.
But, if he shares the magic to see unbalanced freelists, that'd clear it all right up. (unless he doesn't really see it all of the time and maybe said that to try and beef up something)
1) that seems more time based, after a period of activity, more than anything. But if you wanted to measure, you'd be looking at a query that finds the "oldest non-processed row" -- one that starts range scanning from the right hand side to find the first row of interest. You would see the LIO's on that query rise over time:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:2913600659112#24034747883219
shows examples of that
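(and a sketch of the measurement itself -- the queue table and column names are hypothetical; the point is simply to trend the consistent gets of the same query over time:)
set autotrace traceonly statistics
-- the "oldest non-processed row" query against a right-handed index
select min(msg_id) from queue_table where processed = 'N';
-- run it periodically; if consistent gets climb week after week while
-- the result stays tiny, the portion of the index the query walks is
-- full of mostly-empty leaf blocks left by deletes -- a coalesce candidate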
2) well, frequently is the answer to that. When they grow, which will be almost immediately. (assuming you mean transactional insert/update/delete -- which I've not really seen too much of, only when people say "hey, oracle deadlocks like mad" -- then I say "bitmaps?")
3) define excessive first and define what types of queries are to be run on this index and tell us why they are fast full scanning this index (because if they are not FFS, no, fluffy indexes in a steady state don't need this)
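(one way to put a number on "excessive" -- a sketch; note that validate structure takes a lock on the index, and INDEX_STATS holds only the last index analyzed in your session:)
analyze index t_idx validate structure;

select lf_rows, del_lf_rows,
       round( del_lf_rows / greatest(lf_rows,1) * 100, 1 ) pct_deleted,
       used_space, btree_space,
       round( used_space / btree_space * 100, 1 ) pct_used
  from index_stats;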
sure there is a case for reordering columns in indexes -- I just cannot understand how reordering predicates in a where clause would accomplish this (from past comment above about reordering predicates)
Even I have a job to rebuild my text index, about every other month.... But that is very much an inverted list index with a bit of a bitmap. It tends to get larger than it needs to be, and a rebuild provides a measurable decrease in IO on retrieval and decreased work on insert (best of both worlds; rare, but it can happen)
Answer and how to build the model
Anjo Kolk, March 03, 2005 - 3:45 pm UTC
Like a couple of other people who have contributed to this thread, I would like to get the answers to the questions that Tom never got answered by Don. So please, Don, answer them.
<may be off topic>
Now to get back to the predictive model that Don was proposing. I have been looking at the Logical I/O (LIO) type at the block level, and by sampling constantly one can actually find why an LIO was done. One can identify LIOs for space management, transaction management, index scans etc. So by checking the type of LIO per SQL statement and keeping a history of them, one can see if LIOs for space management are going up, or if LIOs for index scans go up. Actually, with some magic, one can find the object the LIOs were done on. So for example an analyze may find chained rows, but if no query accesses the data in the chained part, who cares about the chaining? But many DBAs may actually see the number of chained rows, decide to recreate the object, and see no real improvement .....
So I think that with the current information that is available in Oracle (V$ and X$) we can't really build an effective model. We need LIOs per type per SQL statement so that we can see which statement is accessing which object and why. Then keep the info over time and we can see some interesting trends and predict what will happen in the future ....
So I don't think that the current statistics are really suited to build the predictive model that Don was suggesting. We may need a complete new approach looking at the LIO type to see what was really done by Oracle ....
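(for what it is worth, the closest stock approximation I know of -- a sketch against v$segment_statistics, available in 9iR2 and up -- gives LIOs per object, but not the per-type breakdown described above, which is exactly the gap:)
select object_name, value logical_reads
  from v$segment_statistics
 where statistic_name = 'logical reads'
   and owner = user
 order by value desc;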
</may be off topic>
David Aldridge, March 03, 2005 - 6:46 pm UTC
>> but has been used by someone else
Name and shame, Tom, name and shame!
We could all do with a laugh.
March 03, 2005 - 7:20 pm UTC
in the days of ISP's and dhcp and all -- I'd rather not -- too many chances for error, I seriously doubt it was Don or Mike though.
Correct inaccuracy in something I said
Mike Ault, March 04, 2005 - 11:34 am UTC
In my posting where I said you need to reorder predicates in order to use a reordered concatenated index, I was wrong: you don't have to re-order the predicates in the SQL statements that use the concatenated indexes. This seems to indicate that re-ordering the columns to reduce the clustering factor of an index is an even better idea than I thought.
March 04, 2005 - 1:13 pm UTC
you mean even worse idea than you thought.
<b>disclaimer: nothing in this followup means or implies that the converse of what I'm saying is not true. I can come up with examples that show that reordering the columns in a concatenated index can in fact improve performance -- but it would be the rarest of all cases where the clustering factor was the driving reason. HOW the data is used is paramount. Your DOMAIN KNOWLEDGE of the data is second (what you know about the data itself). The clustering factor -- if considered -- would be down my list</b>
for the thing that drives the selection of the order of the columns in a concatenated index is......
mostly the questions you ask of the data.
Unless, that is, you want to go back to the drawing board and come up with a completely new set of indices to go with your nicely clustered index.
Take the simple example:
select * from T where ..... (t copy of all objects)
where owner = :x (show me all of SCOTT's objects)
where owner = :x and object_type = :y (show me SCOTT's tables)
where owner = :x and object_type = :y and object_name = :z (show scotts emp table)
you can either have
a) one index on (owner,object_type,object_name) to satisfy all three.
or say you discovered the table was 'sorted' by object_name, object_type, owner -- you created the index in that order, you would need:
b) index on (owner,object_type)
index on (object_name,object_type,owner)
To maintain the performance of the first two queries and make the third (which at this point is actually pretty much a single row fetch and hence clustering factor doesn't really matter at all but we'll ignore that) have a better clustering factor.
(hopefully that is obvious to all or most -- that you need two indexes in order to even come a tiny bit close to the performance of the first ordering).
Or, say you have a predicate as simple as:
where a = :a and b > :b and c = :c
and there is an index on (a,b). You discover the table is sorted by B,A however. What can happen to us?
ops$tkyte@ORA9IR2> create table t ( a int, b date, c int, d char(50) );
Table created.
ops$tkyte@ORA9IR2> insert into t
2 select dbms_random.value(0,50), sysdate-10000+rownum, dbms_random.value(0,1000), 'x'
3 from all_objects;
27982 rows created.
<b>Now, since B is unique, this table is in fact "pretty much sorted by B, A" -- we'll see that with the clustering factor in a moment..</b>
ops$tkyte@ORA9IR2> create index t_idx on t(a,b);
Index created.
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T', cascade=>true );
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2> select *
2 from ( select num_rows, blocks from user_tables where table_name = 'T'),
3 ( select clustering_factor from user_indexes where index_name = 'T_IDX' );
NUM_ROWS BLOCKS CLUSTERING_FACTOR
---------- ---------- -----------------
27982 275 12053
<b>the closer to the number of rows and the further from the number of blocks a clustering factor is, the "worse" it is. This one is far from the number of blocks</b>
ops$tkyte@ORA9IR2> variable a number;
ops$tkyte@ORA9IR2> variable b varchar2(25)
ops$tkyte@ORA9IR2> variable c number
ops$tkyte@ORA9IR2> exec :a := 42;
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2> exec :b := to_char(sysdate,'yyyymmdd')
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2> exec :c := 55;
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2> select a,b,c
2 from t bad_cf
3 where a = :a
4 and b > to_date(:b,'yyyymmdd')
5 and c = :c;
no rows selected
ops$tkyte@ORA9IR2> set autotrace traceonly
ops$tkyte@ORA9IR2> alter session set events '10046 trace name context forever, level 12';
Session altered.
ops$tkyte@ORA9IR2> select a,b,c
2 from t bad_cf
3 where a = :a
4 and b > to_date(:b,'yyyymmdd')
5 and c = :c;
no rows selected
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=5 Card=1 Bytes=15)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=5 Card=1 Bytes=15)
2 1 INDEX (RANGE SCAN) OF 'T_IDX' (NON-UNIQUE) (Cost=2 Card=5)
<b>Note the cost of using the index, note the number of IO's -- 157</b>
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
157 consistent gets
0 physical reads
0 redo size
315 bytes sent via SQL*Net to client
368 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
ops$tkyte@ORA9IR2> set autotrace off
ops$tkyte@ORA9IR2> @traceoff
ops$tkyte@ORA9IR2> alter session set events '10046 trace name context off';
Session altered.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> drop index t_idx;
Index dropped.
ops$tkyte@ORA9IR2> create index t_idx on t(b,a);
Index created.
<b>so, we reorder the index columns...</b>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T', cascade=>true );
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select *
2 from ( select num_rows, blocks from user_tables where table_name = 'T'),
3 ( select clustering_factor from user_indexes where index_name = 'T_IDX' );
NUM_ROWS BLOCKS CLUSTERING_FACTOR
---------- ---------- -----------------
27982 275 275
<b>and note that the clustering factor -- well, it just don't get better than that</b>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> variable a number;
ops$tkyte@ORA9IR2> variable b varchar2(25)
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec :a := 42;
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2> exec :b := to_char(sysdate,'yyyymmdd')
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select a,b,c
2 from t good_cf
3 where a = :a
4 and b > to_date(:b,'yyyymmdd')
5 and c = :c;
no rows selected
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> set autotrace traceonly
ops$tkyte@ORA9IR2> @trace
ops$tkyte@ORA9IR2> alter session set events '10046 trace name context forever, level 12';
Session altered.
ops$tkyte@ORA9IR2> select a,b,c
2 from t good_cf
3 where a = :a
4 and b > to_date(:b,'yyyymmdd')
5 and c = :c;
no rows selected
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=3 Card=1 Bytes=15)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=3 Card=1 Bytes=15)
2 1 INDEX (RANGE SCAN) OF 'T_IDX' (NON-UNIQUE) (Cost=2 Card=252)
<b>cost of the query went down....but</b>
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
211 consistent gets
<b>the amount of work went up :(</b>
0 physical reads
0 redo size
315 bytes sent via SQL*Net to client
368 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
ops$tkyte@ORA9IR2> set autotrace off
ops$tkyte@ORA9IR2> @traceoff
ops$tkyte@ORA9IR2> alter session set events '10046 trace name context off';
Session altered.
ops$tkyte@ORA9IR2> spool off
<b>
Here, when we could "goto a=42, find b > sysdate", it was easy -- all of the a=42's were together and we range scanned a small set of b > sysdate within that. We looked at less than 1/50th of the rows in the table via the index (because a has 50 values -- a=42 would be about 1/50th)
When we could not do that, we had to "goto b > sysdate and for each b you find, see if a isn't 42". Well, that was hard, because 65% of the rows satisfied the b > sysdate and we had to look for a=42 in all of them
Might you want to understand the QUESTIONS asked of the data.....
and how the data is used in those questions...... and what the data looks like
before you look at a clustering factor? I would. I surely would.
Because in the real world, we don't have such simple examples -- they are much more complex -- lots of predicates involved. It is not this simple.
</b>
Last Updated in this thread
A reader, March 04, 2005 - 11:44 am UTC
Tom,
I am a bit curious about this one. I don't see any changes that you have made to this thread today, but in your recent tab it's showing the last updated as 04 Mar 2005 11am!
March 04, 2005 - 1:16 pm UTC
people were using other people's names to post junk; I confirmed that and removed them.
Last comment from "A Reader"
Bill Schwartz, March 04, 2005 - 11:51 am UTC
Some folks were being childish and posting under other people's names. Tom was gracious enough to first leave the posts with a warning, and has now removed them. That is why the timestamp is different.
March 04, 2005 - 1:17 pm UTC
ahh, yes, thanks -- didn't see this before I answered myself ;)
dave, March 04, 2005 - 11:54 am UTC
except the comment is actually still there ......
March 04, 2005 - 1:18 pm UTC
there were others..... I haven't heard from Don or Mike one way or the other on that one, but I'm pretty sure it wasn't them
A reader, March 04, 2005 - 3:11 pm UTC
Hi Tom,
You are really the best when it comes to explaining things and concepts (and proving them as well). There is little doubt about that.
And even your book - "Expert One-on-One" - is one of the best Oracle books I have read in recent times. It explains each and every aspect in depth and in terms as simple as possible.
best time to post a new question
A reader, March 04, 2005 - 4:14 pm UTC
btw - what is the best time to post a new question in this site ?
I have always found "Sorry I have a large backlog right now, please ask a question later", with the result that you always have to ask your questions as follow-ups to some related discussion. But that has not prevented me from getting the answers I am looking for; the response (both in response time and quality) from Tom has been simply superb.
Experience is one thing, but it is still better if you have experience and have also got all the concepts right. That way, if something works or performance improves tremendously, you know for sure it is not a fluke.
btw - I was the one who posted the junk (you were referring to). I did not seriously mean it, and looking back now, I should not have posted it. Sorry about that
March 04, 2005 - 6:16 pm UTC
depends -- I take new questions all of the time (i travel lots too so timezones from gmt+2 to gmt-8 are common) -- but you have to realize I can take maybe 10-20 on a good day, and there are at least 15,000 distinct IP's hitting this site a week....
A reader, March 04, 2005 - 4:18 pm UTC
And the person in whose name I posted has a very common name (in the part of the world I come from), but the way I put it out was not right; it was meant to be a joke ;-).
But I should not have done that
new book Tom....Oracle by Example
denni50, March 04, 2005 - 4:57 pm UTC
No one encompasses the term 'scientist' more than you, Tom... and it's not because you proclaim to "know it all" but because of your die-hard mantra to prove, test, benchmark, 'experiment, experiment, experiment'.
Day in and day out you prove and show by example, with step-by-step instructions, clear and concise explanations... with additional follow-ups to show and present your case.
A 'scientist' is someone with an inherent desire and quest to learn and understand something, applying principles and processes, formulation and experimentation, to validate concepts and observations.
Time and time again you prove your case in black and white.
If there is such a thing as an 'Oracle Scientist' then no one epitomizes it more than you, Tom... the others are just 'Igors' (Dr. Frankenstein's assistant).
(please don't take offense if your name is Igor...no offense is meant).
very interesting debate!
(need to read up on freelists to better understand what the chatter is about).
On evaluating credentials on the web
Donald K. Burleson, March 04, 2005 - 6:28 pm UTC
Bill Schwartz wrote >> You said and I quote you here:
"Of course, there are notable exceptions, but lets face it, it the world today you often dont get a chance to get to know people, and a resume provides you an excellent insight into the mind-set and experience of the stranger at the other-end of the web. . ."
I must take exception to this. A resume provides you with nothing but a bunch of words. Let's face it: in today's world a resume is no better than the person who composed it. It may be factual, it may be half-truths. There is NO WAY to tell that over the web. Only in a face-to-face meeting can you get comfortable with what was written (which is why most ((stress MOST)) employers require face-to-face interviews whenever possible).
Hi Bill,
Yes, that's true, sadly. I see Oracle résumés all the time claiming degrees from schools the writers have not attended, listing certifications they do not have, etc. This book comes to mind:
http://www.dba-oracle.com/images/dummies_ocp.jpg
Despite the half-truths in our dishonest society, I can verify credentials if I really want to. Universities will verify degrees, and I can do very reliable background checks with tools like www.usasearch.com
But I'd like to think that within the Oracle community we can have some self-integrity, and anyone offering expert advice would be kind enough to share their background.
(Tom, it's really me, I'm at a client site, using their IP!)
March 04, 2005 - 6:41 pm UTC
Don --
come on name names, give me the names of the people whose credentials you are so desperately in need of that are offering "expert" advice that you do not feel you have. You must have at least one or two -- seems to be some sort of burr under your saddle -- you keep raising this point.
We'll see what we can do to make you feel more comfortable that the advice they give you scripts, demos, technical information for -- that you can reproduce, see for yourself -- are in fact sound.
Since seeing is not believing for some -- we'll see what we can do to ensure that the people proving things, showing how it works -- graduated from excellent high schools at the very least.
So -- who is/are the "expert(s)" bothering you so much with unknown credentials? Until you name one or two, I cannot think of whom you might even be referring to.
And really -- we are very interested in understanding how to detect the unbalanced freelists you "see all of the time"?
Predictive modeling with AWR data
Donald K. Burleson, March 04, 2005 - 6:41 pm UTC
Anjo Kolk wrote: "So I don't think that the current statistics are really suited to build the predictive model that Don was suggesting. We may need a complete new approach looking at the LIO type to see what was really done by Oracle ...."
Hi Anjo,
Thanks for your thoughts! Sure, a 100% reliable predictor would never be possible, for the reason you noted, but prediction works in the realm of probabilities, not absolutes.
One of the points that I was trying to make is that statistically significant predictors don't have to make sense to be proven reliable by empirical trials.
In the book Unobtrusive Measures by Eugene Webb, we see examples, as we do in the MMPI studies, whose results are accepted in US courts:
http://www.dba-oracle.com/t_software_regression_modeling.htm
In sum, I believe that there is enough data in AWR to show enough about how a table/index is accessed, and enough data in the data dictionary about the state of an object to make a statistically valid predictive model.
Remember, the validity of the model can be tested and re-tested with real-world data, and the decision rules can be refined to continuously improve the model.
These models can never be 100% spot-on, but a tool with a 70% hit rate would be very useful to me . . .
Thanks for the feedback, Anjo . .
Test Case!
Donald K. Burleson, March 04, 2005 - 6:57 pm UTC
AJ wrote >> "Can we get a test case or two that settle this unbalanced freelist issue for once and for all?"
Hi AJ:
This only happens under heavy load, and it is well-documented inside an "Internal Distribution Only" document that was accidentally sent to me by Oracle Technical Support. I'm kind of frustrated because only Oracle employees can get access to these notes, and Tom says he can't see them either.
Anyway, AJ, I'm sure that the issue is reproducible, if you have 200 concurrent users and a 16-way SMP box.
Sigh - I wish I'd just added "groups", and we could move on. . .
Anyway, please be patient. I'm working on it . . .
************************************
By the way, I'm sure that this will not be a popular opinion in this forum, but I wish people would stop thinking that these single-user test cases are valid proofs for performance issues. They aren't, and relying on them can be very dangerous.
It is well understood that performing Oracle database testing in a small environment can lead to disaster. Oracle performs far differently under extreme load than it does with only a few users, and the vast majority of production-related performance problems are not discovered until the system is live in production.
This myth of scalable proofs is especially dangerous as well-known Oracle experts provide one-on-one, single-user examples that are often invalid in the real world.
I'm working on an article on this myth right now, and I'll send you a link when it's completed. . .
Flame on . . . .
Also, please be patient if I don't answer right away. I'm in a country with bad internet connectivity.
March 04, 2005 - 8:06 pm UTC
it is not true, it has not been true, it would freeze many an Oracle database cold if it were true (due to space just not being reused -- NOT for performance which was your original claim, but you keep trying to move the cheese)
Period.
And if you add "groups", I can demonstrate on a measely single user system in a trivial fashion -- it is a design "feature" of freelist groups, It has NOTHING to do with performance (which is what you were saying in the original precepts). It is not the first time you've said it (refer to your silver bullets article which has nary a single silver bullet it it -- but rather a series of "we tried to get inputs, diagnose a cause and solve it", had there been a silver bullet, a trained monkey could have solved the problems now couldn't it)
Unbalanced freelists (which do not happen) would not be a performance issue Don.
And if you SEE something "all of the time", you MUST have SOME WAY to recognize it -- to be able to look and say "ah hah-- there be the signature of them there unbalanced freelists" -- so, what is that signature? That is what I want to see -- what is that signature.
References to a mystical magical "internal only document that Oracle is hiding and would prove my innocence" -- geez.
"it is well understood", laughing totally outloud.
Let us hear how you see this all of the TIME. Please -- that is all, you don't have to prove they exist, just tell us -- just tell us "how do you see them". That is easier than a test case to reproduce the issue; just tell us how you recognize it (you don't just assume it, do you? you must have some way to say "oh, I've seen that -- it is unbalanced freelists, let me show you").
That's all -- just tell us how you see them all of the time, there must be some method (else I cannot see how you see them)
Because, if you cannot share how you see it, and you rebuild, and you claim that anything positive is due to it -- well, I wouldn't know what to say. Chin hits floor, mouth open. Using that technique, I could prove that because it rained on Tuesday and you left your car window open and your friend sneezed -- a rebuild was necessary.
So, what method do you use to see it all of the time?
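(for the record, there is a documented probe that would show such an imbalance if it actually existed -- a sketch, with a hypothetical table T created with two freelist groups:)
set serveroutput on
declare
    l_free number;
begin
    for g in 1 .. 2    -- one call per freelist group
    loop
        dbms_space.free_blocks
        ( segment_owner     => user,
          segment_name      => 'T',
          segment_type      => 'TABLE',
          freelist_group_id => g,
          free_blks         => l_free );
        dbms_output.put_line( 'freelist group '||g||': '||l_free||' blocks' );
    end loop;
end;
/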
And -- please -- tie in dba_free_space with all of this, your nice story above holds water about as well as a sieve. You had three gig of free space (by magic) but weren't able to use it. Curious, isn't it?
Hey, I've an idea -- can people with 16 way boxes and more than 200 users post their experiences with unbalanced freelists and the performance impact they've had? We must have one or two people with a configuration at least that large.
I too eagerly await that article on myths (as do others below). It'll be fun. I hope you use real facts -- not "I've seen it a billion times" -- that'll make it much more bulletproof.
funny, it is easy to say "yeah, but that doesn't work in the real world, I've seen it not work".
It is harder to say "and I know why it didn't work, let me tell you"
Otherwise, you have no idea what happened, you are seeing effects and making up causes. (whoops, getting too close to that darn science stuff again, oh well -- couldn't resist)
Credentials
Donald K. Burleson, March 04, 2005 - 7:11 pm UTC
Tom wrote >> "come on name names, give me the names of the people whose credentials you are so desperately in need of that are offering "expert" advice that you do not feel you have. You must have at least one or two -- seems to be some sort of burr under your saddle -- you keep raising this point."
Desperately in need?
No desperation, just my BS radar. Like everyone on the web, I must evaluate people's credibility, especially if I'm thinking about relying on their advice.
OK, for starters, how about the scientists of the Oak Table sharing their scientific credentials. I think everyone else would like to know, too:
http://www.dba-oracle.com/oracle_tips_credence_qualifications.htm
If the Oaken Fellows are scientists, I'd like to be the first to acknowledge their status, but many of them don't say anything about their credentials.
BTW, on another point Tom, doesn't Oracle already have primitive predictive models?
http://www.dba-oracle.com/oracle_tips_predictive_modeling_10g.htm
March 04, 2005 - 8:15 pm UTC
Which scientists of the oak table have published material on the web or anywhere in an expert sense that you need the credentials thereof?
see
http://asktom.oracle.com/pls/ask/z?p_url=http%3A%2F%2Fwww.jlcomp.demon.co.uk%2Fscientist.html&p_cat=SCIENTIST_2005&p_company=10 for an interesting article on scientists. Turns out, Don, your article on credentials was very useful: your own readers did one of my tricks and used a dictionary. That indeed was useful.
But back to the names -- the ones offering "expert" advice that you cannot find resumes for. They are the only "harmful" ones, right -- so which ones can you not find? Many of them have written books -- that seems to put them in a category that is cool with you (publish and you are OK, right -- your 'credentials' page says so)
But come on - a name, one to start, we'll tackle them one by one
And yes, desperate -- you wrote the credentials page, you asked more than once here. You must have at least ONE person by name?
I've never said good predictive models cannot be built Don.
I've said your precepts don't hold water. Period.
And a model has to be built on premises that can actually happen. "People taking showers or baths" -- sure that can be used perhaps, because it actually happens.
Unbalanced freelists being a performance issue -- nope, not true (even if it could happen)
Migrated/Chained Rows with full table scans -- nope, not true
But tell you what, go ahead -- no one is stopping you. Build those facts in.
and speaking of credibility.... hmmm.
http://asktom.oracle.com/pls/ask/z?p_url=http%3A%2F%2Fwww.jlcomp.demon.co.uk%2Freview_01.html&p_cat=DKB_REVIEW_2005&p_company=10
Maybe we should check some credentials.
What's in a name
Ajay, March 04, 2005 - 7:28 pm UTC
Looks like this AJ guy stole the words right out of my mouth, D.
If ignorance is bliss, Don must be exceptionally happy
Howard J. Rogers, March 04, 2005 - 9:01 pm UTC
"This only happens under heavy load".
Sure, Don, sure. Tell you what: create a two-instance RAC on your laptop. Create a table with two freelist groups. Log on as a user to one instance. Sit there allocating extent after extent after extent. Now log on to the other instance... and try and insert a row into a table so that it ends up *using one of the extents you've just allocated*.
Voila. Unbalanced freelist groups. And whilst my laptop is a tad on the heavy side, it's hardly the heavy load you had in mind, is it?
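(a sketch of that recipe -- the table name and storage numbers are invented, and the segment must live in a manually managed tablespace, since freelist groups are fixed at create time:)
create table flg_demo ( x int, pad char(200) )
storage ( freelists 1 freelist groups 2 );

-- from instance 1: these blocks land on freelist group 1
alter table flg_demo allocate extent ( instance 1 );

-- from instance 2: this insert draws from freelist group 2, cannot see
-- the blocks just allocated, and extends the segment instead
insert into flg_demo values ( 1, 'x' );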
Two years ago, on C.d.o.s., you exposed your outrageous levels of complete ignorance about how indexes worked, and when called on it pathetically claimed "Oh, well, that's how it worked in 1995 when someone leaked me the source code".
Now I see you repeat the trick: your levels of ignorance on freelists and freelist groups are so monumental it's breathtaking, but someone 'inadvertently' slipped you an eyes-only document that proves your case?! What a complete load of horse manure!
There was no source code. There was no internal Oracle document. You make such claims up to make it seem to the unwary that you have special sources of knowledge that are closed to the rest of us. It's a classic Voodoo Tuning technique.
http://www.dizwell.com/html/voodoo_tuning.html
The facts are, sigh though you might, that if you'd known what freelist GROUPS were, you'd have said freelist groups. But you didn't, so you didn't. Even now, you claim only heavy loads will reveal what you are wittering on about (despite having claimed you see it all the time), when I can demonstrate the issue on a laptop with just me typing boring old 'allocate extent' and 'insert' statements. It is painfully apparent that, even now, you just don't have a clue.
Rather than pen yet another barely-literate exercise in self-promotion, why don't you actually do some real research and learning for a bit? Start off by finding out how you size an SGA in 9i, will you?
Don wants resume fluff
Ryan, March 04, 2005 - 9:02 pm UTC
To be fair, a lot of people throw out claims and accusations on the internet and do not know anything. A lot of people like to attack well-known technical people to make themselves feel good, with no basis for doing so. So Don's request for credentials is fair.
However, it seems like the only credentials he wants are resume fluff. Did you go to a big-name school? Did you work for Oracle? How many years do you have in the business? Have you published in academic journals?
First off, Don went to a state school for both his undergraduate and graduate degrees. If you look in his 'How to perform the Oracle interview' book, he states that people who go to state schools should be relegated to being average members of the IT team.
Don also states that he is a Professor Emeritus at Webster University. This is not much of a university. It is an open-door college that lets just about anyone in. When I lived in St. Louis they used to advertise on television. Do you think Harvard advertises on TV? Most state schools do not even do that.
Real graduate schools do not teach the classes that Don taught. I am assuming Don taught Oracle-related courses. That is not graduate-level work. Graduate-level work is Artificial Intelligence, Algorithms, Networking theory, etc... Database classes taught at the graduate level are all about theory. Oracle classes are training and are not graduate-level work.
This is not to say that Don did not teach a good Oracle class. I'm sure he did. However, it's not real graduate-level work. I am finishing up my first master's degree this spring and will start a second one next fall. I have considered doing a Ph.D. in Computer Science. I would never go to a university that taught Oracle DBA classes to graduate students. There is a difference between training and education.
Don also states that he is the author of over 30 books. I am 'assuming' (I could be wrong) that he counts each new version of one of his books towards this number. I read his Oracle/Unix book. There are still things in there from version 7 in the version 9i book. I do not recall seeing a single thing about version 9i, though the cover claims it to be a new 9i version. The statspack chapter was basically a copy and paste of parts of his statspack book. I skimmed his Oracle Interview book. It does have some decent questions in it. However, it tells the interviewer to look for resume fluff, such as where you went to school, etc...
Again, this is not to say that if I needed a production DBA, Don couldn't do the job. Though I would never pay him the $500/hour he claims to charge. It's clear that by claiming to be one of the greatest DBAs in the world, he is able to radically increase his billable rate. By telling people this enough times, people believe it. Cary Millsap charges a fraction of this to come in and investigate your system for a week. Tough choice who I would choose to hire...
Also, what is so special about working for Oracle? Jonathan Lewis never worked there. I have interviewed a number of consultants from Oracle, and in spite of their rates I was not always impressed. I have talked to people in Oracle Support who are not impressed with them either. I know Oracle has a lot of good people (namely the owner of this site), but just because you worked there does not mean you have 'credentials'. It's just like companies who want Top 5 consulting experience. This does not mean you're good. It just means you were billed out at a higher rate.
When Albert Einstein published his first article on the theory of relativity he was a clerk. Famous German scientists would visit him at work. I believe he dropped out of their equivalent of graduate school. Don would have discarded what he had to say because he did not have a nice resume.
The real argument should be: are they correct? I think Don has a case here. It looks like being a true Oracle scientist requires a very high level of skill. Few people really have this level of skill (and Don is not one of them). I have seen a number of people waste large amounts of time over-analyzing performance issues because they want to be a scientist. This does not mean we shouldn't study the science of Oracle. It just means that most of us are not good enough to fully implement it. I am sure Tom is, and so are most of the people defending the process.
RE: Ryan's post - well said!
Bill Schwartz, March 04, 2005 - 10:03 pm UTC
I could not help but check in from home to see how this thread has progressed. Ryan, you said most eloquently what I have been trying to say to Mr. Burleson. It pains me that I failed, but I am so glad that someone was able to put it in a more understandable light. I have worked with (and for) some truly skilled folks, and some truly abysmal ones as well. It never had a thing to do with JUST what school they attended or what degree they had earned. Some of the most skilled I/T people I have worked with had nothing more than Associate's degrees (or Bachelor's degrees in the "softer" subjects). But you could always tell how good they were by what they contributed to a discussion. As this is somewhat off-topic I won't take this point any further, except to say thanks to Ryan, Tom, Jonathan, and Howard for ensuring that newcomers (like me) don't get caught up in the glitz of high-priced experts who may be doing more harm than good by propagating... well, propaganda with no substance. Regards to all who have posted here, and I eagerly await Don's reply to Tom's questions (although I won't hold my breath).
Okay - I lied, I must add one more request here
Bill Schwartz, March 04, 2005 - 10:20 pm UTC
Mr. Burleson, after re-reading your last few posts I'd like to propose the following: please fax Tom the "Internal Only" document you received by accident. Since he not only works for Oracle but is a V.P. as well, there should be no problem with his seeing it. He can verify for the rest of us what the document contains - I trust him implicitly to tell us the truth, the whole truth, and nothing but the truth (so help him, Don). Tom's credentials are, as always, above reproach here.
Okay, NOW I'll sit back and wait.
unbalanced freelists
A reader, March 04, 2005 - 11:47 pm UTC
If this unbalanced freelist thing only happens on 16 way SMP servers, I won't worry about it.
March 05, 2005 - 6:56 am UTC
well, supposedly it only happens under "high load conditions" too, but then again:
<quote>
Here's what happened:
- Really big table (25 gig), 20-40 concurrent updates at any time
- Running out of disk space, management won't buy more
- On Sunday a single job purged out 3 gig of rows
- On Monday, table immediately began extending, costing the company a fortune in
unplanned downtime
- The DBA is punished
</quote>
seems like that dreaded single threaded purge was running during "off peak" doesn't it?
Unbalanced freelists
Donald K. Burleson, March 05, 2005 - 2:00 am UTC
A reader wrote "If this unbalanced freelist thing only happens on 16 way SMP servers, I won't worry about it."
Agreed, but it happens in other cases too:
1 - RAC systems with freelist groups
2 - When people use "Freelist groups" (instead of freelists) in a non-RAC environment:
But of course, don't take my word for it:
************************************************
</code>
http://www.dba-oracle.com/oracle_tips_freelists.htm <code>
From Melissa Holman on Metalink - 12/19/01:
"Free list groups can have a positive impact in an exclusive environment, they're not just for OPS. Free List groups will reduce contention on the segment header by having concurrent processes attempting to insert assigned to separate free list group blocks to look for blocks eligible for insert. The algorithm in a parallel environment uses the instance/thread number to assign an instance to a free list group. In an exclusive environment, the process number is used to assign a process to a free list group. The primary cost with using multiple process free lists or multiple free list groups is increased space usage, but in a highly concurrent insert environment, the performance gains will far outweigh the potential downside of a little extra space being used."
From Steve Adams:
Yes, freelist groups work almost as well as freelists for avoiding contention on data blocks, and of course will also help to reduce segment header block contention. However, so far as avoiding data block contention is concerned, freelist groups should not be part of your strategy in a non-RAC environment. They do not work as well as freelists, and are more expensive.
**********************************
Tom wrote (about the script): "do you really use this? i mean, you join dba_tables to dba_segments by table_name to segment_name?"
Yes, I used it at the direction of Oracle Technical Support. I did not write it; I just used it to help solve the problem I've so tirelessly described.
Just check the iTAR's with my name in them. The script is right there, along with the unbalanced freelist issue.
Tom Kyte, you should be ashamed of yourself for misleading your readers, just to get a jab in at me.
If you think that all of this subterfuge is going to make the subject of predictive table reorganization disappear, you might be mistaken.
Beat me up, insult me, but it does not change the fact that unbalanced freelists happen and that these tables will benefit from reorganization.
March 05, 2005 - 7:13 am UTC
Gee Don, we never would have known that -- oh wait, yes we did, we explicitly stated it over and over and over again.
<quote flashback to the very initial response>
The author may have meant freelist groups - whereby this "out of balance" could
happen but I fail to see how that would affect the performance of an update (as
most updates don't really ever require a free block, unless the row migrates but
then we are back to number 3 above again, aren't we). So, assuming they really
meant "freelist groups" and "inserts" -- I again fail to see how this would
affect the runtime performance of the insertion? It could affect space
utilization, but not performance.
</quote>
But don, you kept saying freelists over and over
your silver bullet paper, says freelists
You said freelists, not freelist groups. You KEPT saying it, over and over. You persisted in saying it. You defended it. You say you see it "all of the time".
That script -- which I would hope a world renowned DBA would have looked at and said "uh-oh, there is a problem" -- doesn't identify the issue you say you had
Period.
Not even a tiny bit.
Those would not be unbalanced freelists (there, you said it again); that would simply print out the number of blocks on freelist GROUPS 1 and 2 (what about the other 48, Don?)
Where have I insulted you Don? Please -- point it out, quote it, show me.
My gosh, misleading my readers -- hah, oh my. Get real. (is that insulting? I'm not sure)
If pointing out that something is wrong is insulting to you -- so be it, I cannot help you in that sense. I tried privately one year ago, and got challenged to a public duel to an immensely widened email distribution list.
I can only help by trying to make sure the wrong stuff is pointed out. Tell me Don, where have I misled anyone? Did you actually *read this* page?
Tell me don, you have 3 gig in dba_free_space, you have unbalanced free lists, you did a 3gig purge -- I mean, really -- look at your story, it makes no sense.
What does dba_free_space have to do with freelists?
What the HECK does this quote of yours mean exactly?
<quote>
Yes, that is exactly what I described, when (sic) dba_free+dspace shows free (sic) blovbks,
yet the table extends wildly.
</quote>
(I've always wanted to use (sic) before, did I use it right?)
What relationship between dba_free_space and a table extending wildly is there, Don? What is the meaning of that statement? Please describe, even briefly, the relationship between a pool of free extents not associated with any segment yet and .... a table extending wildly?
What was the context of "dictionary managed tablespaces only"? What is the relationship exactly between extent allocation and freelists or freelist groups?
If you had unbalanced freelists, how did you know? As pointed out, that script doesn't do it.
What exactly are bitmap freelists? Is this a new feature coming out in 10gr2?
Why wouldn't a DBA be able to figure out how to get the space in dba_free_space to be used?
if you purged 3gig and that is what you needed for some period of time, why wasn't the 3gig in dba_free_space good enough? how did this become an emergency? So, you had 3gig in the segment that only 1/50th of the users could use -- but the other 49/50ths of the users should have been able to pull from dba_free_space
But I digress, it is a funny story -- but nothing adds up, nothing.
How about this Don:
<quote>
http://www.dbazine.com/burleson20.shtml
....
alter table customer_order freelists 5;
alter index cust_pk freelists 5;
I knew that this was only a stop-gap fix and as soon as they ran their weekly purge (a single process) that only one of the five freelists would get the released blocks, causing the table to extend unnecessarily.
</quote>
gosh, I thought it took a honking big machine under massive duress in order to do that?
Now, who is misleading whom? No mention of groups there; I would love to see how you change that story to be about freelist groups (well, without actually recreating the table, that is)
Fact is, freelist groups are rarer than rare in the wild. They are used in OPS and RAC. Many people don't even know about freelists (no group). And the fact that freelist groups cannot be altered in makes them very rare indeed.
So, how do you see this "all of the time" again?
As Jonathan pointed out, using the wayback machine we can flash back to about 1995/96 and find bug 359219 in version 7.3.2 -- a bug about unbalanced freelists.
Found and fixed.
Long long long ago. But to still be talking about it as "fact, something you see all of the time" in 2005 is somewhat misleading to the readers here.
EOF - End of Fur-ball
Don Burleson, March 05, 2005 - 2:04 am UTC
Tom wrote I've never said good predictive models cannot be built Don.
No, but despite REPEATED DIRECT requests and the fact that ITS THE TITLE OF THREAD, you refuse to give your forum members even a hint of how you would do it.
Like Robert Freeman said hundreds of lines ago, this is just a fur-ball, a bunch of side-show mudslinging, and sanctioned insults. Your behavior speaks for itself.
March 05, 2005 - 7:19 am UTC
Don,
I answer to people with questions.
I don't respond just for you and you are the one requesting it. I don't care how many times you tried to move the cheese, how many times you tried to say "oh, ignore that man behind the curtain, look over here"
I'll give you the same answer I give others "Please check out the review page, follow ups have to be in the context of the original discussion or I just might ignore them -- this was a discussion about how the precepts were not a good starting point, nothing more and nothing less. When I'm taking new questions please feel free to follow with this new request. If it relates to the original discussion, I'll gladly follow up"
You haven't answered my requests either, so be it. Come on -- one silly little name? One metric to see unbalanced freelists (not groups -- we know groups, have written groups, groups yes -- but freelists -- no)
As for the title and all, contrary to what you believe, I do not edit what others put here, I do not selectively remove posts (if you have proof otherwise, please, pony it up). The only posts removed here so far have been duly noted (and this thread is unique in that respect, truly unique).
This would be funny if it weren't so sad.
Yes Don -- I do want to go there -- where have I insulted you?
is don gone?
the curious, March 05, 2005 - 4:36 am UTC
i guess tom could have *PROVEN* that joining dba_tables to dba_segments by table_name to segment_name is not a good idea (and technical support is sometimes wrong too ;o).
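(to spell out why, a sketch: segment_name alone is ambiguous -- it is shared by tables, indexes, lobs and objects in other schemas -- so a sane join carries owner and segment_type as well:)
select t.owner, t.table_name, s.bytes, s.blocks
  from dba_tables t, dba_segments s
 where s.owner        = t.owner
   and s.segment_name = t.table_name
   and s.segment_type = 'TABLE';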
Don - please be clear
Jon, March 05, 2005 - 8:15 am UTC
Don,
I'm not sure if it is deliberate or not, but could you please stop being vague. What, of the following, is your last post saying. It is not clear:
1) Freelist = Freelist Group
2) Freelists can get unbalanced
3) Freelist Groups can get unbalanced
I'm genuinely curious to know which of the above statements you actually believe to be true.
With regards to your thoughts on a person's academic qualifications, how are you in a position to judge whether my post-grad studies at a university in New Zealand (where I'm from) are comparable, or otherwise, to studies done in Sydney, Kuala Lumpur, Bangalore, Oxford or Boston?
Doesn't that strike you as somewhat presumptuous?
Ironically, rather than showing intellectual rigour, using such crude social heuristics shows one to be shallow and elitist.
The most important thing I have learnt from Tom via this site is not technical, but rather a different way of thinking about Oracle. And as Jonathan Lewis' article about true scientists mentions, that is something anyone can learn... if they are willing.
BTW, we have 30,000 users on 2x12 CPU (RAC). We have never had any problems with freelists or freelist groups... but that may have had something to do with migrating to ASSM before we went to RAC :)
David Aldridge, March 05, 2005 - 9:42 am UTC
>> (I've always wanted to use (sic) before, did I use it right?)
:D
thanks Jon
Reader, March 05, 2005 - 10:20 am UTC
you said it best....
My degree may not be from the 'best' of schools (Yale, Cambridge, Princeton et al.), however it was more than enough to get me where I am today.
Equally important as academics is the will and desire to achieve and accomplish, to pursue knowledge and intellect, to demonstrate resourcefulness, dedication, initiative, curiosity, open-mindedness, and, lastly, the humility to seek out and learn from the best and the brightest.
As an IT Mgr I would rather hire a high school graduate, with all the above traits, than a Ph.D content to sit on their laurels and pompously express their all encompassing knowledge.
March 05, 2005 - 1:42 pm UTC
One of the smartest guys I work with didn't quite do the college thing. It would have been a shame to pass him over, given his years of experience, qualifications, and level of intelligence. Glad I didn't really care whether he went to college -- only that he could do the job better than anyone else.
Good points Bill Schwartz
Mike Thomas, March 05, 2005 - 10:43 am UTC
Been there, done that...
"Is it his West Point discipline, his MS in software engineering, or his training as an Army ranger?"
So what?
I don't think about those things when I'm reading an article or book about Oracle.
By comparison, these attributes do not begin to outweigh another person's business experience or Oracle skills. I do not think uniquely developed business skills belong exclusively to people with the above attributes. I believe persistence, approach to learning, and willingness to improve (read: accept criticism, acknowledge and change oneself) are the keys to success.
Nor do I think the quoted attributes make a great contribution to a foundation in Oracle. Rather, it is a risk when a person's credentials (ego) hamper communication and teamwork. Open discussion of technical issues is one way to set egos aside and learn. Long live the forums. :-)
Sorry for spamming your forum, Tom.
Re: is don gone
Ajay, March 05, 2005 - 11:05 am UTC
the curious wrote:
"i guess tom could have *PROVEN* that joining dba_tables to dba_segments by
table_name to segment_name is not a good idea (and technical support is
sometimes wrong too ;o)."
How do you *PROVE* a data model? Would an Oracle data dictionary poster help?
As for tech support being wrong -- why is that such an epiphany? Do you believe proof-reading is a bad idea?
Jay
A reader, March 05, 2005 - 11:19 am UTC
All,
Please stop prompting Don to go off topic.
Don, please address this rather than going off track:
how do you recognize unbalanced freelists -- what is the signature -- and how do they impact performance?
That is what we are all interested in seeing.
Thanks,
Jay.
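Worth noting for anyone trying to answer this: the closest supported probe, DBMS_SPACE.FREE_BLOCKS, reports free-block counts per freelist *group*, not per individual freelist -- which is part of why a per-freelist "signature" is so elusive. A minimal sketch, assuming a freelist-managed (non-ASSM) table SCOTT.T; the names are illustrative only:

    declare
        l_free_blks number;
    begin
        -- freelist_group_id 0 addresses a segment created without freelist
        -- groups; a segment with N groups would be probed once per group id
        dbms_space.free_blocks
        ( segment_owner     => 'SCOTT',
          segment_name      => 'T',
          segment_type      => 'TABLE',
          freelist_group_id => 0,
          free_blks         => l_free_blks );
        dbms_output.put_line( l_free_blks || ' blocks on the free list' );
    end;
    /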
Predictive stuff
Bob B, March 05, 2005 - 12:52 pm UTC
"The goal is to create a reliable predictive model that will suggest tables and indexes which will measurably benefit from reorganization."
I'd like to dig into the ironies of the objective. Maybe I'm wrong, but when I hear "predictive" I think of seeing into the future. This model would have to tell us at time t1 that at time t2 the index or table will benefit from reorganization. I think what Don is looking for (please correct me if I'm wrong) is a reactive model. At time t2, he wants to use the information from t1 and say, "Hey, based on the way the data was used and the way the table/index has changed, I think we should rebuild this table/index so it will remain as performant at t2 as it was at t1." The only predictive part of this is the assumption that the table/index will be used in the same way.
So we can react to changes and manually (or automatically) "return" the system to the way it was, OR we can predict that things will be used in the same way and tune the process. To half-reference another thread on this site: "Hey, it looks like we load each day's data in, but the queries against the data are always a range scan of days against a single ticker. If the index (ticker, date) is not performant enough, maybe we should use a cluster on ticker." *That* is truly predictive.
I have a precursor challenge to the above objective. Please, someone prove that a table or index rebuild is *PREDICTIVE* and not *REACTIVE*.
The only way I can see it being predictive is if the object in question is used in completely different ways at different days/times. For instance, if the first week of the month is all inserts, the middle two weeks are updates/deletes, and the end of the month is all selects. Although the table and/or indexes may get out of whack after the first week, we could predict that the users will benefit from a table/index rebuild before the third week (this seems like it would be more common in a DW than an OLTP system).
Even in the above situation, there is probably a better solution. A good pctfree can help ensure that there are minimal migrated rows. A good partitioning scheme may help both the data changes at the beginning of the month and the querying at the end of the month.
The reorg situation above is also mostly reactive. The predictive part about it is that we are predicting at this moment, that on the third week of each month, a reorg will help the upcoming selects. The annoying part is we have to keep doing likely unnecessary work each and every month.
I am in no way saying that index/table rebuilds are useless. I am saying that their usefulness is best in a reactive manner, reacting to an event that will not happen regularly, and to move the index/table to a more stable state.
And, for the sake of reiteration:
I have a precursor challenge to the above objective. Please, someone prove that a table or index rebuild is *PREDICTIVE* and not *REACTIVE*.
Feel free to correct what I wrote above, but please keep it in context of the challenge.
Is this AskTomHanks.com?
Alex, March 05, 2005 - 2:03 pm UTC
What a great thread. I'd just like to point out a few enormous holes in Mr. Burleson's logic.
He suggested somewhere above that Tom is an anonymous DBA? Is that supposed to be funny? The name of the site is ASK TOM.
Since it is very apparent he cannot disprove Tom and has no legitimate rebuttal, he attacks credentials. Hmmm, OK:
- The most successful guy ever dropped out of college (Bill Gates).
- An Ivy League grad who became president also happens to be a colossal moron.
So what does having a fancy degree prove?
Also, Jonathan Lewis is one of the guys on his respected list. Mr. Lewis has been disproving Don from the get-go.
Mr. Burleson keeps referring to being insulted here by Tom.
We can all read the thread, see the text. It's not there, it just isn't.
How am I supposed to take technical advice from this guy?
To Err is DON and to Forgive (Correct) Tom Kyte
A reader, March 05, 2005 - 10:05 pm UTC
It's human nature. We face such guys everywhere in our profession.
When you can't give technical proof for your points, or to refute others' points, you rebut through non-technical means.
Then the EGO war starts.
The ego is that ugly little troll that lives underneath the bridge between your mind and your heart.
WHERE IS THE PROOF? That's all DON has to produce. Nothing more, nothing less, and we can all live happily ever after. :)
Parallel or sliced FTS.
Sai, March 06, 2005 - 6:47 am UTC
Tom,
You said a full table scan (FTS) of a table with migrated rows wouldn't introduce more LIOs than the same operation would after the table was re-organized, because Oracle knows it will eventually have to get to the row's real block anyway. That sounds like a good thing, but here are the questions:
1) How does Oracle parallelise an FTS? Does it visit the actual block holding the row data when it hits a migrated row, or let another query slave go for it?
2) We would like to slice the table data by rowid ranges and simulate a parallel FTS; we had to do this for a variety of reasons (LONG datatypes... over a db_link... etc.). Does this simulated parallel FTS read a migrated row twice, i.e., once from the rowid range containing the head rowid piece that points to the actual row, and again from the actual row?
Thanks,
Sai.
March 06, 2005 - 7:33 am UTC
1) it breaks the table up by "range" (slices of the table -- from rowid A to rowid B, in effect).
If I have a range that has a head rowid piece for a migrated row, I know I can ignore it since either:
a) the tail of it is in my range and I'll read it in a bit
b) the tail of it is in your range and you'll read it in a bit.
2) that would be forced to follow the migrated row in that case. This is rather like an "index range scan", since we are in fact reading by rowid (like an index). But without the overhead of having to read the index structure to find the rowids.
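For anyone who wants to build such slices by hand, a minimal sketch: one rowid range per extent, built from DBA_EXTENTS with DBMS_ROWID. SCOTT.T is an assumed, non-partitioned example table:

    select dbms_rowid.rowid_create( 1, o.data_object_id,
                                    e.relative_fno, e.block_id, 0 )     lo_rid,
           dbms_rowid.rowid_create( 1, o.data_object_id,
                                    e.relative_fno,
                                    e.block_id + e.blocks - 1, 32767 )  hi_rid
      from dba_extents e, dba_objects o
     where e.owner        = 'SCOTT'
       and e.segment_name = 'T'
       and o.owner        = e.owner
       and o.object_name  = e.segment_name
       and o.object_type  = 'TABLE'
     order by e.extent_id;

Each lo_rid/hi_rid pair then bounds one slice -- "select ... from t where rowid between :lo_rid and :hi_rid" -- and, as described in (2) above, a slice that hits the head piece of a migrated row will follow it to the actual row, wherever that block lives.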
Rules of the game
DaPi, March 06, 2005 - 8:49 am UTC
In answer to Bob B and the question "what is an Oracle Scientist?" I'd like to add my 2¢:
A model of reorganisation could look something like this:
If the pattern of use of our real-world database is unchanged (ASSUMPTION) then:
1) if we do nothing, then end-user response time will increase by 5% per month (PREDICTION).
2) if we reorganise objects A, B & C, then end-user response time will increase by 5% per month (PREDICTION).
3) if we reorganise objects P, Q & R on alternate Thursdays when there's an R in the month, then end-user response time will not increase (PREDICTION).
4) if we reorganise objects P, Q & R on alternate Thursdays when there's an R in the month, then we will make a contribution to world peace (PREDICTION).
As Oracle Scientists the ASSUMPTION is all we need state - as Oracle DBA's we'd better talk to the Business Analysts to see if it's reasonable.
PREDICTIONS 1, 2 & 3 are *scientific* in the sense of Karl Popper ( http://plato.stanford.edu/entries/popper/ see the section "The Problem of Demarcation"): they can be tested and can be falsified. (Popper is rather pessimistic: a theory/model can only be disproved. It can't be proved by experiment, only corroborated.)
PREDICTION 4 is probably not *scientific* - I can't see how it's measurable or testable, so I can't see how it can be falsified.
So any theory/model worth its salt is *predictive*.
Now our reorganisation model (parts 1,2 & 3) can be criticised, but not for being unscientific:
- Philosophical criticism: (a) it's not based on anything rational, (b) it doesn't add anything new...
- Business criticism: it hasn't a good cost/benefit ratio...
- DBA criticism: it hasn't a good risk/benefit ratio, it's not sustainable...
- Practical criticism: it's damn all use in the real world.
The test of the model is: does it work? i.e. does it resist falsification? We can't complain if it is based on the state of chicken entrails, the DBA's bath/shower ratio, a freelist statistic, hard-parse ratio etc etc IF IT WORKS. The Philosophical criticism (a) cannot be maintained: we cannot object scientifically to the chicken entrails. If a similar model without the chicken entrails works just as well, then they will fall to Occam's Razor.
The basis of our model may be that "we have observed this behaviour over the last two years". I think this is what Bob B would call *reactive*. This could be called a trivial model and is subject to Philosophical criticism (b).
To return to Don's challenge:
- To build a predictive model is a scientific endeavour.
- Some of his *precepts* are good scientific models: we can measure the time to read 100 and 500 blocks.
- Some of his *precepts* don't seem to be good scientific models: if we don't yet know what is meant by unbalanced freelists (well, I don't :) ) we can't test the model.
- All of these models are open to criticism on Business, DBA and Practical grounds.
The problem arises that a good scientific model can be useless on Practical grounds: the time to read 500 blocks is ultimately irrelevant. It's (e.g.) end-user response time that counts. Complexity is the problem.
On this site Tom's examples on a single-user, two-table system are perfect for his didactic purposes, but he often warns about side effects in the real world and has even been known to suggest *testing* :)
The 1000-table, 20-developer, 1000-user system is closer to a biological system than pure physics (see "side effects" above), just like medicine. In such a situation, I believe no one person can have all the relevant factors in mind at the same time. I think our models must then become probabilistic, e.g.:
If the pattern of use of our real-world database is unchanged (ASSUMPTION) then:
3) if we reorganise objects P, Q & R on alternate Thursdays when there's an R in the month, then in 95% of cases end-user response time will not increase, in 5% of cases it will go to hell (increase by 100%) (PREDICTION).
Such a model is still scientific even if it is much harder to test. And by overcoming the Practical objection, we also allow the Business & DBA criticisms to be evaluated in a realistic way. (If anyone doesn't like a probabilistic model, then they had better not rely on Quantum theory, transistors, lasers etc etc.)
I suppose the ultimate model will make probabilistic predictions about the company's bottom line...
That is my view of the rules of the game, having been (some long time ago) a practising physicist . . . degrees and things etc etc
That was more than 2¢!
DaPi, great stuff here!
Mike Ault, March 06, 2005 - 10:42 am UTC
DaPi, thanks, this is good material. We need to refocus on the question instead of getting bogged down in bashing. Unbalanced freelists are just one of the items mentioned, yet we have wasted an inordinate amount of time arguing about them. How about getting back to the question of developing a model whereby we can determine if an index needs to be rebuilt? Note: I am lumping both coalesce and rebuild into the term "rebuilt" for brevity's sake. The index questions worth answering are:
1. How do we determine if an index is a right-handed index that would benefit from a coalesce?
2. What is a good determination of index sparseness, and when is it detrimental to performance? (A sketch of one measurable starting point follows this list.)
3. When should a bitmap index be rebuilt?
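On question 2, one hedged, measurable starting point (a sketch, not a complete model): ANALYZE ... VALIDATE STRUCTURE populates the INDEX_STATS view for the one index just analyzed, and a high proportion of deleted leaf rows is one crude sparseness signal. SCOTT.T_IDX is an assumed example name, and note that VALIDATE STRUCTURE locks the underlying table against DML while it runs:

    analyze index scott.t_idx validate structure;

    -- INDEX_STATS holds one row: the index validated in this session
    select name, height, lf_rows, del_lf_rows,
           round( del_lf_rows / nullif( lf_rows, 0 ) * 100, 1 ) pct_deleted
      from index_stats;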
As to whether LIO or PIO is more costly, I have to go with what delivers the goods to the user the fastest. As long as the CPU is not a bottleneck I feel OK with using LIO in place of PIO. As Tom has stated "We can tune waits away".
As to table rebuilds: we as DBAs, and certainly those of us who are also developers, can't always predict from the initial design of an application how a table column will be used as far as IUD (Insert, Update and Delete). This leads to tables that have less than optimal characteristics. If we don't set some of the block-level settings correctly, the table will migrate rows, leading to increasing PIO, which is bad for performance. Also, careful consideration needs to be given to the use of freelists and freelist groups. Given the above, the following questions are asked:
1. Can we develop a process to determine what tables need to be rebuilt?
2. Is there a method to properly determine internal settings for block storage, freelists and freelist groups?
I have my methods but am interested in seeing others' thoughts and ideas. I look at the block-wait determination factors (initrans, freelists, freelist groups, row migration stats, pctfree) to determine if a table needs to be rebuilt. In the majority of cases no one has changed these from the defaults. However, determining the proper setting for pctfree when there are multiple columns that may or may not be updated can be problematic. Determining the proper number of freelists and freelist groups can also be problematic. Many times DBAs have only a general idea of how many transactions concurrently access a table for IUD.
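At least the row-migration input to such a process is directly measurable. A minimal sketch, assuming the CHAINED_ROWS table has already been created with $ORACLE_HOME/rdbms/admin/utlchain.sql and that SCOTT.T is an example table:

    analyze table scott.t list chained rows into chained_rows;

    -- counts migrated and chained rows together; ANALYZE does not distinguish
    select count(*) migrated_or_chained
      from chained_rows
     where owner_name = 'SCOTT'
       and table_name = 'T';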
If the list doesn't refocus on the issues instead of getting lost in the byways...I am out of here.
Oh, as to anonymous posters, I feel they are cowards. I speak my mind, put my opinions out there and proudly sign my name to them. If they are later proven to be incorrect, fine, that is what these forums are for. If you can't even claim what you say as yours, maybe it wasn't worth saying in the first place.
David Aldridge, March 06, 2005 - 2:11 pm UTC
Mike,
I'm very unhappy about the idea of lumping together "index rebuild" and "index coalesce" -- as the documentation makes clear, these are two very different operations:
http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10739/indexes.htm#sthref2130
Too often folks are casual about terminology and will use terms as if they are interchangeable, but here they very clearly are not.
I have the same problem with references to "table rebuilds"... there is no such operation that I recognise, and that means that to use this term is to be imprecise, to confuse the inexperienced, and to leave yourself wide open to criticism from the more experienced (unless you use it as shorthand for a predefined process... e.g. "i. Create new table based on old table, ordering rows on insert; ii. Apply indexes, grants, triggers... henceforth I shall refer to this particular process as a 'table rebuild'"). Maybe in some cases you are implying that a table rebuild is a one-off process for changing block size, or for changing the table to be hash clustered, or for de-migrating rows and changing the storage clause, or for partitioning the table -- but, long story short, one circumstance may warrant one definition of a "table rebuild" where another warrants a different one, and I'd rather not be guessing ;)
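To make the distinction concrete, the two statements (against an assumed example index SCOTT.T_IDX) are:

    alter index scott.t_idx coalesce;        -- merges adjacent, sparsely filled
                                             -- leaf blocks in place; no new segment
    alter index scott.t_idx rebuild online;  -- recreates the index as a new segment
                                             -- (ONLINE allows concurrent DML)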
End of this discussion
Thomas Kyte, March 07, 2005 - 2:13 pm UTC
Due to the overwhelming number of posts and the vigor of the discussion -- I've decided to "close" this post as it stands.
If you click the link below to add a followup, it will fail when you go to save it (trigger will prevent it).
So, thanks everyone for your input, but we'll call this one "done" for now.
A reader, May 15, 2008 - 4:26 pm UTC
3 years ago
Alain, August 01, 2008 - 4:14 am UTC
D. Burleson wrote:
"Anyway, when I revisit this thread in two years, I'll venture my own prediction. By February 2007
these Oracle predictive models will have become mainstream, well-accepted, and commonplace."
So what about those Oracle predictive models?
Sometimes...
David, August 27, 2008 - 12:58 pm UTC
Tom,
I worked at Oracle in the past, though in tools, not databases, and I understand your point about doing this over and over, as well as the marginality of the gains from this kind of maintenance, like the migrated rows issue. But the fact is that too often I come across the type of customer that spends as little as possible on systems yet demands the world of them.
I'm looking back to the beginning of the post.
There is often a window for daily downtime (at night), so it's possible to have maintenance done over and over again...
And in many cases, disk IO is a huge bottleneck. When the database controls the full production cycle of a plant, the peaks of output can cause surges of disk IO well above the 4000 disk IOs that are the standard peak output of a fibre-optic disk array, and with tight margins there is very little budget to jump to the next level of disk arrays.
From your results, it's possible to have some gain (around 30%) in disk IO, with very little (almost no) impact on the disk IO of inserts, which is the imposed bottleneck. :-(
This 30% gain may be the difference between shipping out the goods in the 1h30m window, or having the customer complain about stock-outs.
Am I right to assume this gain in IO? Is there any better option to reduce the IO, to meet the demand in those peak hours?
Sometimes(again)...
David, August 27, 2008 - 1:04 pm UTC
Sorry, I got the figure wrong. It's not 30% I meant, but 50%.
From 15955 down to 7798...
August 29, 2008 - 12:57 pm UTC
the physical IOs decreased because the blocks were in the buffer cache, not because of any reorganization.
Ignore the physical IOs there -- they are not relevant unless we restart the machine between queries,
and even then they would be just about the same.
and make sure to see the important point made right after the numbers:
"I might do it once, and fix the obvious mistake we made with PCTFREE on this table. There is no reason to do it over and over again."
And things in Portugal must be very different -- the last time I saw systems that could "often" come down daily is so far in the past I cannot even remember, especially if they were 'important' systems where the need to eke out every bit of performance was crucial.
Organisation of tablespaces
Serge Shmygelskyy, May 06, 2010 - 10:11 am UTC
Hi Tom,
I have a question which relates not to extents but to tablespaces. Still, I believe it is also relevant to this thread.
I have a client who wants to put each big table in the DB (e.g. bigger than 1G in size) into a separate tablespace which, in turn, has to be placed on a standalone filesystem. This will create about 100 tablespaces in the database (which, in turn, means 100 mountpoints). Also, each tablespace should consist of the smallest possible files (they don't want them to exceed 2G).
To me this is far from optimal. But their reasons are as follows:
1. If we have more than one table in a tablespace, then during recovery from a datafile corruption we lose access to more than one table, which is bad for them.
2. Bigger files are harder to restore as well, which, again, will create some problems in case of recovery.
I believe (after reading your site and based on my previous experience) that we should stick with another approach, e.g. create a few tablespaces (one for tiny tables, another for small tables, etc.), and maybe some read-only tablespaces for old table partitions. This will also simplify the UNIX side.
Could you advise on that, please?
May 06, 2010 - 2:41 pm UTC
1) the database has done block-level recovery for many releases; today, in the 21st century, we shouldn't even be taking an entire file offline for a recovery -- we recover blocks.
And ask them for real numbers: how many datafile corruptions have they had in the last 10 years that required any sort of recovery?
2) you do not need to use big files, you can have a single tablespace with dozens of small files rather than one big one. Tablespaces never imposed any sort of limitations on you regarding that.
You should use tablespaces to make your life better and achieve realistic goals - goals set out on paper as requirements - not as implementation details.
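Point (2) expressed in SQL, with assumed file names and the client's preferred 2G file size:

    create tablespace big_data
        datafile '/u01/oradata/db/big_data01.dbf' size 2g;

    alter tablespace big_data
        add datafile '/u02/oradata/db/big_data02.dbf' size 2g;
    -- repeat as the tablespace grows: one tablespace, many small files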
Serge Shmygelskyy, May 06, 2010 - 4:01 pm UTC
Thanks for your quick response.
One more short question: what is the optimal/practical limit for the size of a datafile and the number of datafiles in a tablespace? E.g. if I have one tablespace of 140G and another of 500G, how many files should I have for each one?
Thanks in advance
May 06, 2010 - 9:02 pm UTC
"it depends"
what would you like to have to manage? what is your pain point? are large files something that would be cumbersome to you? are large files not a consideration? Are you using a good logical volume manager like ASM (builtin) - if so, a single file might be a great idea (a single file, but many devices).
it depends. there is no single, right, correct answer.
Serge Shmygelskyy, May 07, 2010 - 3:45 am UTC
No, there is no ASM. It is a 'standard old' setup with a SUN server and storage configured as RAID5 (which I also don't like). 2 controllers. Multipathing. Nothing actually special. I saw somewhere on your site that it's good to have files on different devices attached to separate controllers -- in that case Oracle will create segments in different files in round-robin fashion.
So I doubt if I should have one huge file for a tablespace of about 200G. And the client insists the files should be as small as possible (to a reasonable limit -- for them it is 2G). But having 100 files of 2G is also not really good in my view (and this is only for one tablespace, and there will be many).
May 07, 2010 - 8:29 am UTC
with such limited IO resources - it isn't going to make much of a difference what you do.
I didn't follow your conclusion "so i doubt if..."
when you say "so i doubt if...", I read "therefore, I doubt", meaning you seem to think that what you just wrote leads you to believe something.
You should use the number of files that makes sense for you in your situation, given your needs, wants, desires, requirements, etc.
You will not get a size from me like "they should be 3.14728gb in size". I will only confirm that you can use 100, 2gb files or one 200gb file - whichever floats your boat.
Lots of smaller files: limits your time to recover, if your unit of recovery is assumed to be a datafile. Potentially increases the time to read data, as we have to open many more files, and opening a file takes a long time.
Few larger files: increases your time to recover, if your unit of recovery is assumed to be a datafile (which is a false assumption; the unit of recovery should be a database block). Increases contention on some operating systems, as having many people read/write the same file at about the same time can lead to a bottleneck; this varies by OS, however.
pros and cons of both, both work.
No action nowadays in this thread :)
Anto, May 11, 2010 - 11:03 am UTC
No action nowadays in this thread :)