
Question and Answer

Tom Kyte

Thanks for the question, mohammed.

Asked: March 31, 2008 - 3:16 pm UTC

Last updated: September 29, 2009 - 7:37 am UTC

Version: 9.2.8

Viewed 1000+ times

You Asked

Dear Mr. Kyte,

Greetings

Can you please give me some guidelines on how to tune these topics:

1) db file scattered read, db file sequential read.
2) hot datafile reads (hot blocks), physical reads and logical reads.
4) high CPU time.
3) and most importantly, the COB: I have indexes on some of my columns, but it seems the COB decides that a full table scan is better than an index scan. How can I resolve this issue (it is ignoring the indexes)? Sometimes I think, if it sees that a full table scan is better, I should just let it go.

Please note that the indexes are in separate tablespaces on separate disks. My RAM is 2 GB, of which 1 GB is for Oracle, and there is a good amount of free space, on HP-UX with 2 CPUs.

Thanks.

and Tom said...

this will be tongue in cheek

1) reduce the amount of physical IO you need to perform

2) see #1

3) (1,2,4,3 interesting....) "COB"?? maybe you mean "CBO". In order to get around this one, follow these steps:

a) say out loud - full scans are not evil
b) say out loud - indexes are not all goodness
c) goto (a) until you actually believe what you are saying out loud.


4) use less cpu. perform less work.


Please understand - what could anyone say about such broad topics? How to tune db file scattered read/db file sequential read.... what more can you say beyond "make the IO faster, or do not perform the IO in the first place"? How do you accomplish that? Oh, in about 500 different ways, given different circumstances.
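To make point 3 above concrete, a minimal sketch - the table, index and predicates are hypothetical, invented for illustration:

set autotrace traceonly explain

-- selective predicate: a handful of blocks via the index - the CBO uses it
select * from orders where order_id = 12345;

-- unselective predicate (say 90% of the rows match): visiting the table
-- row by row through an index would read far more blocks than one
-- multiblock full scan, so the CBO full scans - and is right to
select * from orders where status = 'OPEN';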


Comments

IO

mohammed Abuhamdieh, April 04, 2008 - 12:45 pm UTC

Greetings Mr. Kyte,
I have these two reports, with some comments after them.

report 1
             Snap Id  Snap Time           Sessions  Curs/Sess  Comment
             -------  ------------------  --------  ---------  -------
Begin Snap:        1  30-Mar-08 09:28:19       142       39.8
End Snap:          3  30-Mar-08 09:54:06       143       39.8
Elapsed:       25.78  (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 32M Std Block Size: 8K
Shared Pool Size: 256M Log Buffer: 512K


Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
Event                               Waits   Time (s)  % Total Ela Time
---------------------------  -----------  ---------  ----------------
db file sequential read         4,245,432      4,772             54.46
CPU time                                       2,283             26.05
db file scattered read            171,620      1,348             15.38
log file parallel write             2,890         75               .85
buffer busy waits                   2,040         60               .69
-----------------------------------------------------------

report 2

             Snap Id  Snap Time           Sessions  Curs/Sess  Comment
             -------  ------------------  --------  ---------  -------
Begin Snap:       56  31-Mar-08 09:26:29       128       40.5
End Snap:         57  31-Mar-08 09:33:51       130       39.6
Elapsed:        7.37  (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 512M Std Block Size: 8K
Shared Pool Size: 304M Log Buffer: 512K



Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
Event                               Waits   Time (s)  % Total Ela Time
---------------------------  -----------  ---------  ----------------
db file sequential read           125,400        402             36.01
CPU time                                         387             34.69
db file scattered read             12,460        247             22.12
latch free                          4,558         39              3.49
control file parallel write           143         12              1.11
--------------------------------------------------


I have a strategy to reduce and resolve the db file sequential read and db file scattered read waits: set db_file_multiblock_read_count = 128 and make the buffer cache as large as the RAM allows. My idea is to get as many blocks as possible into memory. As you can see from these two reports, the statistics show a big difference, but the problem now is that the CPU is under a high load.

My question: is this a good strategy? Please note I don't have access to the code.

As for db_file_multiblock_read_count = 128: I always set that value and the effect is always good, but from reading around the internet I concluded that you should leave it at its default, or between 4 and 64.

So please give me some guidelines for this parameter, and explain how setting it to 128 will affect the CPU.
Tom Kyte
April 04, 2008 - 1:14 pm UTC

... Please note i dont have access to code.
...

then please don't expect to make substantial gains :)


your cpu for the observed time went down. You ran for 25+ minutes and used 2,283 cpu seconds. Later you ran for 7+ minutes and used 387 - since 7+ minutes is about 28% of 25+ minutes, one would expect (if cpu use was CONSTANT) that you would have used about 652 cpu seconds.

So, now what.

(32m for a buffer cache is very small; 512m - assuming the machine has the memory - is a better starting point. the buffer cache advisory in that statspack report will tell you where your sweet spot might be)

let all other optimizer-related parameters, like db_file_multiblock_read_count, default.
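For reference, a minimal sketch of querying that advisory directly (assuming db_cache_advice is enabled, its default in 9.2 under statistics_level = typical):

select size_for_estimate              cache_mb,
       size_factor,
       estd_physical_read_factor,
       estd_physical_reads
  from v$db_cache_advice
 where name = 'DEFAULT'
   and block_size = 8192             -- the 8K standard block size above
 order by size_for_estimate;

The cache size where estd_physical_read_factor stops falling appreciably is the "sweet spot" mentioned above.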

CPU

mohammed Abuhamdieh, April 04, 2008 - 2:37 pm UTC

Greetings Tom,

how do Oracle reads (full table/index scans) affect the CPU?
Tom Kyte
April 04, 2008 - 10:15 pm UTC

when we read, we don't use the cpu, so it...

well, you know, I just don't know how to answer this.

when we do physical io, we typically do not use the cpu (because the os, typically, is the one doing the io - we just wait).

when we do a read via logical io, it is all about cpu, and the amount of cpu varies based on the amount of concurrent access.
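A minimal sketch of watching this for your own session - run it before and after a query and compare the numbers (these statistic names are the standard ones in v$statname):

select n.name, s.value
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name in ('consistent gets',            -- logical io: burns cpu
                  'physical reads',             -- physical io: mostly waiting
                  'CPU used by this session');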

sql

mohammed Abuhamdieh, April 05, 2008 - 12:52 pm UTC

Greetings Tom,

I have these two SQL statements which, when I generate an execution plan, show no cost, but in reality they have the following effects.

My question: when I generate an execution plan for a SQL statement, I get an excellent result; but when I trace a session, I get these results. And when I try to increase the buffer cache, my Unix server's CPU goes very high.
So what is happening?

SELECT NVL(SUM(AMOUNT ),0)
FROM
PAC_RCT_VS_BILLS WHERE (BILL_NO,BILL_TYPE) IN (SELECT BILL_NO,BILL_TYPE FROM
PAC_BILLS WHERE BILL_TYPE = :A1 AND PATIENT_ID = :B2 AND ADMISSION_NO = :B1
AND STATUS = :C1 ) AND (COMPANY_ID,YEAR_CODE,RECEIPT_TYPE,RECEIPT_NO) IN
(SELECT COMPANY_ID,YEAR_CODE,RECEIPT_TYPE,RECEIPT_NO FROM PAC_RECEIPTS
WHERE CODE = :D1 AND STATUS = :C1)


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 22 0.00 0.02 0 0 0 0
Fetch 22 0.00 176.02 10773 1653373 98 22
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 45 0.00 176.04 10773 1653373 98 22

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 96 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
22 SORT AGGREGATE
22 TABLE ACCESS BY INDEX ROWID PAC_RCT_VS_BILLS
1074700 NESTED LOOPS
540210 MERGE JOIN CARTESIAN
22 TABLE ACCESS BY INDEX ROWID PAC_BILLS
22 INDEX RANGE SCAN PAC_BILLS_HMSIDX8 (object id 77858)
540210 BUFFER SORT
540210 SORT UNIQUE
540210 TABLE ACCESS BY INDEX ROWID PAC_RECEIPTS
540210 BITMAP CONVERSION TO ROWIDS
22 BITMAP AND
22 BITMAP CONVERSION FROM ROWIDS
542014 INDEX RANGE SCAN PAC_RECEIPTS_HMSIDX24 (object id 78026)
44 BITMAP CONVERSION FROM ROWIDS
7535968 INDEX RANGE SCAN PAC_RECEIPTS_IDX2 (object id 78028)
534468 INDEX RANGE SCAN PAC_RCT_VS_BILLS_HMSIDX80 (object id 87917)

***********************************************************

SELECT COST,CATEGORY_TYPE,LISTDTL_NO,SECTION_NO
FROM
CON_PRICELISTCURRENT_DTL WHERE ITEM_NO = :B5 AND SECTION_NO =:B4 AND LIST_NO
= :B3 AND PATIENT_TYPE IN(:B2 ,3) AND NVL(STAFF_ID,-1) = NVL(:B1 ,-1) AND
ROWNUM <2


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 86 0.00 0.00 0 0 0 0
Execute 673 0.00 0.33 0 0 0 0
Fetch 673 0.00 148.11 0 114673 0 147
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1432 0.00 148.44 0 114673 0 147

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 96 (recursive depth: 2)
****************************************************

Tom Kyte
April 07, 2008 - 8:54 am UTC

I did not follow anything you wrote. sorry.

IO

mohammed Abuhamdieh, April 08, 2008 - 1:52 pm UTC

Greetings Tom,

I have some points, please clarify them for me:

1) If I have a big datafile (.dbf) with small data in it, my IO and Oracle reads become so high. Why - what is the relation?

2) What is best with Oracle: a Unix file system with caching off, or with caching on? What is the relation between the file system cache and the Oracle parameter filesystemio_options, and what other parameters play into turning the Unix file system cache off?

3) You have great books on designing and architecting Oracle databases, but I wonder if you have any books on FIXING catastrophic Oracle designs, with no option of rebuilding.
Tom Kyte
April 09, 2008 - 2:40 pm UTC

1) eh?

2) "it depends"

3) hah, you mean "fantasy". Some things cannot be fixed.

IO

Mohammed, April 10, 2008 - 3:08 am UTC

--eh?
What does that mean?

--"it depends"
On what? Give me an example.
Tom Kyte
April 10, 2008 - 10:44 am UTC

eh means - what? huh? no idea what you meant *at all*

you wrote:

... 1) If I have a big datafile (.dbf) with small data in it, my IO and Oracle reads become so high. Why - what is the relation? ...


Ok, how big is big - and why is "big" even relevant in this context.

Ok, so what does "small data" mean, and then when we understand what "small data" is, how small is small.

Ok, 'my IO and Oracle reads become so high' - so high compared to what?

as for 'what is the relation' - I don't even see two concepts to relate....




As for "it depends", see
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:7413988573867

if you take a system using a buffered filesystem today and just "unbuffer" the file system (without INCREASING THE SIZE OF THE SGA TO OFFSET THE SUDDEN LOSS OF THE FILE SYSTEM CACHE), you would be led to believe that buffered file systems are 'faster'.

It depends on the circumstances and the configuration.
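For what it's worth, a minimal sketch of inspecting and changing that setting (assuming an spfile is in use; an instance restart is needed for the change to take effect):

show parameter filesystemio_options

-- SETALL asks for both direct and asynchronous io, bypassing the os file
-- system cache - so grow the SGA first, per the warning above
alter system set filesystemio_options = 'SETALL' scope = spfile;

-- the valid values are NONE, DIRECTIO, ASYNCH and SETALL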


Delete

Parag J Patankar, April 10, 2008 - 8:26 am UTC

Hi Tom,

I am running the following query in a 9.2.0.8 database; it is quite expensive:

delete from VDZ171
where
(((((((((((A0090M=:b1 and A9780=:b2) and A0110=:b3) and A9030=:b4) and A9010=
:b5) and A9110=:b6) and A00904=:b7) and A10104=:b8) and A60004=:b9) and
A02304=:b10) and A9000=:b11) and C81001=:b12)


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 6 0.00 0.00 0 0 0 0
Execute 193351 112.38 423.35 69147 968198 4342542 193351
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 193357 112.38 423.35 69147 968198 4342542 193351

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 516 (OPS$A2DEVHK)

Rows Row Source Operation
------- ---------------------------------------------------
104615 DELETE
104615 INDEX UNIQUE SCAN X01DZ17 (object id 1250249)


Rows Execution Plan
------- ---------------------------------------------------
0 DELETE STATEMENT GOAL: CHOOSE
104615 DELETE OF 'TDZ17'
104615 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF 'X01DZ17' (UNIQUE)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 69145 1.36 289.19
SQL*Net message to client 193351 0.04 0.17
SQL*Net message from client 193351 0.44 58.34
log file switch completion 35 0.47 1.08
latch free 4 0.00 0.01

Please show me how this query can be improved. My sort size is 30m and db cache is 150m.

thanks & regards
parag
Tom Kyte
April 10, 2008 - 11:25 am UTC

that delete is very INEXPENSIVE

On average it consumes:

ops$tkyte%ORA10GR2> select 112.38/193357  avg_cpu, 423.35/193357  avg_elap from dual;

   AVG_CPU   AVG_ELAP
---------- ----------
.000581205 .002189473



ALMOST NO TIME

the problem is - 100% - in your faulty slow by slow logic. You are doing things procedurally, as slow as you can - not in bulk.

The only way - repeat - the ONLY WAY to make this "faster" (significantly faster) will be to alter your logic, use set processing - that is, USE SQL - stop using the database like it is a VSAM file...
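A minimal sketch of the difference, with a hypothetical DRIVING_DATA table standing in for wherever the program currently gets its 193,351 sets of bind values:

-- slow by slow: what the tkprof implies the client is doing
--   loop over 193,351 rows
--     delete from VDZ171 where A0090M = :b1 and A9780 = :b2 and ... ;
--   end loop;

-- set based: one statement, one pass (DRIVING_DATA and the shortened
-- column list are hypothetical)
delete from VDZ171 v
 where (v.A0090M, v.A9780, v.A0110) in
       (select d.A0090M, d.A9780, d.A0110
          from driving_data d);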


IO

Mohammed, April 18, 2008 - 9:32 am UTC

Greetings Tom,

--Ok, how big is big - and why is "big" even relevant in this context.

--Ok, so what does "small data" mean, and then when we understand what "small data" is, how small is small.

--Ok, 'my IO and Oracle reads become so high' - so high compared to what?

--as for 'what is the relation' - I don't even see two concepts to relate....

My case was as follows:

I had 6G of data in two datafiles, each 10G in size, autoextend off.
The cost of physical reads and CPU for SQL statements was high.

I created a new tablespace with 4 small datafiles, autoextend on.
The reads became much lower, and the CPU cost disappeared.

That is what I meant.
My question is why:

1) did the CPU cost for SQL statements disappear?
2) is there a relation between the number of datafiles and reads?
3) is there a relation between the number of datafiles and CPU?
Tom Kyte
April 18, 2008 - 10:24 am UTC

no one could possibly answer your questions.

you don't say where the files WERE and where they ENDED UP

you rebuilt all of the segments, you massively reorganized everything.


likely, if you had used a single 10gb file in the same circumstances, you would have seen the same results. They were almost certainly because of the reorganization - and had nothing to do with the number of datafiles

unless of course, your two original files were on "diska", and diska is a single disk - no striping, slow - and the four new files were on diskb, diskc, diskd and diske - you striped the data.

but you know - without lots of architectural information - NO ONE can say.


but I'd guess "the reorg"

tell you what - put everything back in the original two files; you would probably see the same relative performance you are seeing now with four files.

great response - typical 'knowledge'

kingy, April 18, 2008 - 12:26 pm UTC

this level of knowledge is very typical in our industry. it would worry me if people on the outside read it.

to use a 'tom analogy': can you imagine reading a discussion group with a bunch of civil engineers talking like this about fixing a bridge in need of repair?

tom, you answered these practically unanswerable questions in the most honest and frankest way possible.

i would suggest going back to the basics - the oracle server concepts manual. once you've read that, read it again. next read the developers guide, then the performance tuning guide... just keep reading... read tom's books also! once you think you've read enough, come back here and start trawling through the performance-related questions tom has already answered. finally, if you still have a question, ask it then.

good luck.

**Delete** from previous question

A reader, April 18, 2008 - 2:12 pm UTC

Tom, from Parag's "Delete" question above, he showed a single delete statement:

delete from VDZ171
where
(((((((((((A0090M=:b1 and A9780=:b2) and A0110=:b3) and A9030=:b4) and A9010=
:b5) and A9110=:b6) and A00904=:b7) and A10104=:b8) and A60004=:b9) and
A02304=:b10) and A9000=:b11) and C81001=:b12)

and you responded:
"the problem is - 100% - in your faulty slow by slow logic. You are doing things procedurally, as slow as you can - not in bulk.

The only way - repeat - the ONLY WAY to make this "faster" (significantly faster) will be to alter your logic, use set processing - that is, USE SQL - stop using the database like it is a VSAM file...

"

Where did you see that he's using it in PL/SQL and in a loop [slow logic]? I am surprised you found that out just from the simple delete statement he mentioned.


Tom Kyte
April 18, 2008 - 3:01 pm UTC

do you see how many times that delete was executed?

Since I doubt he typed it in 193,351 times - there was probably a program somewhere with a loop that iterated at least 193,351 times, issuing that teeny tiny super fast delete.

seems like a very safe assumption by deduction - don't you think?

For Kingy from DC

A reader, April 18, 2008 - 2:17 pm UTC

I think whatever you've written in your response would have been better coming from Tom himself. I know Tom has reiterated those things hundreds of times, but coming from you like that, the person you are referring to may not like it. Since Tom is trying to understand the problem and help him, we should not comment on his knowledge or whatever, as it's not our site.

just my 2 cents.
Thanks,

**Delete** from previous question

A reader, April 18, 2008 - 8:11 pm UTC

Yes, there it is.. I didn't see the TIMES in the statistics..

that explains it. I wonder why the person who posted did not give that information [??]

Maybe he's trying to test you ;-) [Sorry, just kidding]

IO

Mohammed, April 19, 2008 - 12:55 pm UTC

Greetings Tom,

--you rebuilt all of the segments, you massively reorganized everything

1) Is it good to reorganize my segments frequently?
2) How frequently do you think I should reorganize my segments?

Also, I have found that my indexes are bigger than my tables: my tables total 6G, the indexes 10G, and some of the tables have 20-30 indexes and 8-16 foreign keys. Does the business sometimes force a design like this, or is the software company's design simply bad and in need of rethinking, no matter what the business requirements are?
Tom Kyte
April 19, 2008 - 3:29 pm UTC

1) NO, remember please - we are totally guessing as to what happened here. We know ALMOST NOTHING about what was in place before, what happened, what is now in place - NOTHING.

2) see #1


as for the last paragraph, again, we know NOTHING here, no more comments.

IO

Mohammed, April 20, 2008 - 2:43 am UTC

Greetings Tom,

--we know NOTHING here, no more comments

What kind of information do you need?

The database design is still the same; I just recreated the tablespaces with more datafiles.
Tom Kyte
April 23, 2008 - 4:18 pm UTC

so, what happens if you recreate the tablespace with FEWER datafiles?

the number of datafiles likely *had nothing to do with it*

the fact that you rebuilt all of the structures - maybe.
the fact that you moved the IO from disk A to disk B - maybe.
the fact that you moved the IO from the slower inside edge of a disk to the faster outside edge - maybe.

there are so many reasons - but you are stuck on "i made many files", which is likely "not at all relevant"

To: A reader

A reader, April 20, 2008 - 11:10 am UTC

This is in response to "a reader's" comment "For Kingy from DC".

The questions from the original poster are a total waste of Tom's time. Tom is too much of a gentleman to say what kingy from dc has said. I totally agree with kingy from dc. All of the poster's questions have been answered in some form or shape on this site. The poster doesn't want to put effort into resolving his problem; he is expecting Tom to resolve it for him.

Tuning never begins with "how to tune db file scattered read..." It starts with "I have a database performance problem which is impacting my business..." This situation is like going to a doctor with an X-ray, pointing at some shadows, and asking him to identify what is wrong and fix the problem. The first question the doctor is going to ask is "what are we seeing you for...?"

Fair play to a newbie

John Hawksworth, April 20, 2008 - 6:15 pm UTC

This post raises a few fundamental issues. The querent obviously has genuine issues, and equally obviously does NOT use English as his first (or even second) language.

I regularly post in the Oracle forums on Tek-Tips (unashamed plug there) and I can smell a leech a mile off - ditto students looking for homework freebies.

Mohammed does not come across as a leech because he took the trouble to post execution plans. This demonstrates beyond all doubt that he cares enough about his job to read a manual in a foreign language, apply it, and correctly post the results. Although he is equally obviously unable to post a concisely defined problem (which I therefore believe identifies him as a newbie), he has also made the effort to "ask Tom", thereby demonstrating that he has the wit to ask an internationally renowned "world-class" punter for advice - this is not normally indicative of incipient idiocy.

Mohammed has posted several times, in what appear to be desperate pleas for help. He may be one of those unfortunate few, in a far-off place, who have been chucked in at the deep end of Oracle, and is floundering about, looking for straws at which to clutch.

I suspect, but freely admit to being a tad too generous, that Mohammed is one of those rare cases where patience above and beyond the call of duty is merited.

However, I have to agree with previous posts, some clear cut questions would be much appreciated by us all, not least the illustrious Mr Kyte.

Tom, what are the "rules of cricket" concerning such querents?

Regards

J

Good one John Hawksworth

A reader, April 21, 2008 - 1:01 pm UTC

I totally agree with Mr John Hawksworth.

Guys, please show some humility, like Tom.
We should not get arrogant because we have a little knowledge and others don't have it, or are trying to gain some.

If you are puffed up with your ability, note that there is always a TOM, DICK or HARRY who knows much more than you and can tame you anytime.

Give newbies a chance to express themselves.
Sometimes language barriers, due to literal word-for-word translation, make a post sound rude, but that may not be the intention.

Poor mohamed

Alejandro, April 21, 2008 - 8:29 pm UTC

The only problem Mohamed has is that he doesn't speak English; he is also a newbie with performance work. It's obvious that he has a real performance problem in those queries he sent.

1,600,000 LIOs! many times! come on!

it's obvious.

It's simple: if you can't touch your poor design, build better indexes. Build focused indexes targeting the 80% of your sql_area cost - many times, 80% of the cost (measuring by elapsed_time works for me) lies in two or three queries, so go fix those (see the sketch below).
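A minimal sketch of finding those two or three statements (v$sqlarea exposes ELAPSED_TIME, in microseconds, from 9i on):

select *
  from (select sql_text, elapsed_time, buffer_gets, executions
          from v$sqlarea
         order by elapsed_time desc)
 where rownum <= 5;   -- the handful of statements worth fixing first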

Wow... That made my day...

NOTNA, April 22, 2008 - 12:38 am UTC

WOW... This really made my day... the part where Tom said "eh" - I almost fell off my chair... hehehhehee

Tariq Lakho, April 22, 2008 - 7:40 am UTC

1. Where should we use the CBO, the RBO, and CHOOSE?
2. Reverse key indexes (where should we apply them)?
Please give examples.
Tom Kyte
April 23, 2008 - 5:48 pm UTC

1) use the cbo, period. (that was easy)

2) see my book "Expert Oracle Database Architecture" or "Expert one on one Oracle" (or simply, well, sort of "search" on this site)

http://asktom.oracle.com/pls/ask/search?p_string=%22reverse+key+index%22
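On question 2, a minimal sketch (the table and index names are invented): a reverse key index stores each key with its bytes reversed, scattering sequential values (such as sequence-generated keys) across many leaf blocks - which relieves insert contention on the right-hand side of the index, at the price of losing index range scans on that column.

create table t ( id number );

create index t_idx on t (id) reverse;

-- an existing index can be converted in place:
alter index t_idx rebuild reverse;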

This is not a newbie issue

A reader, April 22, 2008 - 10:00 am UTC

The issue at hand is neither newbie status nor language. Tom has answered newbie questions on this site with almost super-human patience. In a few questions, Tom has even translated, or guessed at, foreign words for the sake of answering. I have been a regular reader of this forum for the last 5 years.

The real problem is that the poster asked questions which have a broad range of answers that could easily cover a few chapters. It is very easy for a newbie to attach all the RDA/AWR/ADDM/tkprof reports and ask "why is my system slow?". It is possible to answer the question, but it would be time consuming and outside the scope of this forum.

Assists

Tariq Lakho, April 23, 2008 - 12:45 am UTC

Dear Reader,
I apologize, but I mostly read Tom's topics.
Can you please assist me with how I can ask these types of questions?
Can you please give me a link where my desired topic is covered, along with examples?

Thanks.

performance tuning

sachin, May 29, 2008 - 3:22 am UTC

Dear Sir

I want to tune the query below. I have created indexes on the tables it uses.
Could you please tell me how to tune it? Here is the query:
select to_char(stmtdt, 'yyyy') as ID, to_char(stmtdt, 'yyyy') as year
from statement
where borrid= &var_borrId and user_id= &var_user_id and coaid = &var_mdlid
and to_char(stmtdt, 'yyyy') in
(
select os.year from
(
select borrid, year, count(*) as cntActuals
from subjective_prm_trans
where borrid= &var_borrId and user_id= &var_user_id and mdlid = &var_mdlid and endnodeflag = 'E'
group by year, borrid
) os,
(
select j.borrid, j.year, j.cntVariable + k.cntFixed as cntMdlTotals
from
(
select a.borrid, a.year , count(*) as cntVariable
from subjective_prm_trans a, mdl_Parameter_Tree m
where
a.prmid = m.parentid and a.mdlid = m.mdlid and
a.endnodeflag = 'N' and a.value between 0.0001 and 1 and
a.prmid not in (select r.prmid from mdl_parameter_tree r where trim(lower(r.prmname)) = 'project risk' and r.mdlid= &var_mdlid ) and
a.borrid= &var_borrId and a.mdlid= &var_mdlid and a.user_id= &var_user_id
group by a.borrid , a.year
) j,
(
select count(*) as cntFixed
from mdl_parameter_tree u
where u.prmid not in
(
select t.prmid
from mdl_parameter_tree t
where t.rootnode in
(
select b.rootnode
from subjective_prm_trans a, mdl_parameter_tree b
where
a.mdlid = b.mdlid and a.prmid = b.prmid and
a.endnodeflag = 'N' and a.value between 0.0001 and 1 and
a.prmid not in (select r.prmid from mdl_parameter_tree r where trim(lower(r.prmname)) = 'project risk' and r.mdlid= &var_mdlid ) and
a.borrid= &var_borrId and a.mdlid= &var_mdlid and a.user_id= &var_user_id
) and
t.endnodeflag = 'E' and parametertype = 'S' and mdlid= &var_mdlid
) and
u.endnodeflag ='E' and
u.parametertype = 'S' and
u.mdlid= &var_mdlid
) k
) om
where
os.borrid = om.borrid and
os.year = om.year and
os.cntActuals = om.cntMdlTotals
)
order by year desc


db file sequential read and alter table parallel 4

A reader, September 24, 2009 - 6:45 am UTC

Dear Tom,

I had a huge

insert /*+ append */ into t1 select from distant@table

which was taking more than 2 hours and 30 minutes to complete. A 10046 trace file, when profiled, shows that

db file sequential read is about 94% of the total response time.

What we did is the following:

alter table t1 parallel 16;

and the same select/insert took 9 minutes to complete.

Could you please explain what this alter table changed so that the insert became so fast?

Thanks a lot for your always precious answers.
Tom Kyte
September 29, 2009 - 7:37 am UTC

give me a bit more detail here.

what indexes might you have in place there?
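For context, a minimal sketch of the complete pattern the reader describes (remote_table@remote_db is a hypothetical stand-in for the poster's "distant@table"; the enable-parallel-dml step is an assumption about what the session needs for the insert itself, not just the query, to run in parallel):

-- parallel query follows the table's degree automatically, but
-- parallel DML must be enabled per session
alter session enable parallel dml;

alter table t1 parallel 16;

insert /*+ append */ into t1
select * from remote_table@remote_db;

-- after a direct-path/parallel insert, the table cannot be read in the
-- same transaction until you commit
commit;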
