Question and Answer

Chris Saxon

Thanks for the question.

Asked: August 09, 2002 - 8:21 am UTC

Last updated: May 08, 2017 - 9:37 am UTC

Version: 9i

Viewed 100K+ times!

You Asked

Tom: I've recently read an article about performance on Dbasupport.com and I couldn't believe what I was reading about commits.
I usually read your answers on this site and I've read your book as well, and you always suggest not committing frequently. I completely agree with you; I've even compared the approaches myself.
Below is an extract of the article. Please, I would like to know what you think about the author's suggestion regarding commits.


Thanks in advance,
Pablo from Argentina


http://www.dbasupport.com/oracle/ora9i/performance/tuning-2.shtml#commit
"Issue Frequent COMMIT Statements



Whenever possible, issue frequent COMMIT statements in all your programs. By issuing frequent COMMIT statements, the performance of the program is enhanced and its resource requirements are minimized as COMMIT frees up the following resources:



Ø Information held in the rollback segments to undo the transaction, if necessary



Ø All locks acquired during statement processing



Ø Space in the redo log buffer cache



Ø Overhead associated with any internal Oracle mechanisms to manage the resources in the previous three items"




and Tom said...

I'll be blunt

The author is totally wrong here.



Frequent commits -- sure, "frees up" that undo -- which invariabley leads to ORA-1555 and the failure of your process. Thats good for performance right?

Frequent commits -- sure, "frees up" locks -- which throws transactional integrity out the window. Thats great for data integrity right?

Frequent commits -- sure "frees up" redo log buffer space -- by forcing you to WAIT for a sync write to the file system every time -- you WAIT and WAIT and WAIT. I can see how that would "increase performance" (NOT). Oh yeah, the fact that the redo buffer is flushed in the background

o every three seconds
o when 1/3 full
o when 1meg full

would do the same thing (free up this resource) AND not make you wait.

o frequent commits -- there is NO resource to free up -- undo is undo, big old circular buffer. It is not any harder for us to manage 15 gigawads or 15 bytes of undo. Locks -- well, they are an attribute of the data itself, it is no more expensive in Oracle (it would be in db2, sqlserver, informix, etc) to have one BILLION locks vs one lock. The redo log buffer -- that is continuously taking care of itself, regardless of whether you commit or not.


that article bullet point #12 is wrong, wrong, wrong. So wrong.... So very very wrong.
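For anyone who wants to see that commit-time wait for themselves, one hedged check (assuming you can query V$SESSION_EVENT) is to look at the 'log file sync' event for your own session -- that is the wait a synchronous COMMIT performs:

-- how long this session has spent waiting on commits so far
-- (TIME_WAITED is in centiseconds)
select event, total_waits, time_waited
  from v$session_event
 where sid   = sys_context('USERENV', 'SID')
   and event = 'log file sync';

Run it before and after a batch that commits row by row and the difference is the time spent doing nothing but waiting for redo to be flushed.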





Comments

Statement on Cache Buffer

Robert Chin, August 09, 2002 - 11:43 am UTC

Tom, the author, in Part 1 of the article, under #3 "Shared SQL Statement",

http://www.dbasupport.com/oracle/ora9i/performance/tuning-1.shtml#truncate

stated
"Unfortunately, the cache buffering is applied only to simple tables; multiple table queries and joins are never cached."

Is this true ?

also he said, which I find odd...

"By default, ORACLE uses CHOOSE optimizer mode. To reduce the potential for unplanned full table scans, you should avoid using the CHOOSE option; either use the RBO or the CBO throughout your database."

Please clear the smoke for us
Thanks


Tom Kyte
August 09, 2002 - 12:32 pm UTC

he is wrong, totally wrong.

there is more wrong information in that article than correct.

it is trivial to see that statements involving many tables are cached. select sql_text from v$sql will show you this.
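If you want to check that for yourself, a quick look at the shared pool (this assumes you have SELECT privilege on V$SQL) lists the cached statements -- multi-table joins sit there right alongside single-table queries:

-- a few of the statements currently cached in the shared pool
select sql_text
  from v$sql
 where rownum <= 20;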

Tom, U R The Man!

Mr. 420, August 09, 2002 - 3:58 pm UTC

:-) Thanx Tom for making sense out of that nonsense article and correcting the incorrect information. Those who come to your web-site are truly the lucky ones. You ARE AWESOME! :-) Please keep up the good work, and helping out the otherwise lost souls in this vast realm of databases.

It is so confusing

netbanker, August 09, 2002 - 4:36 pm UTC

Hi, Mr.Toracle,

It is so surprising to read that frequent commits do not actually enhance the performance of the database in areas such as locking, redo log, etc.

So, what is your recommendation on commit frequency when designing an application?

Tom Kyte
August 09, 2002 - 5:05 pm UTC

It is not confusing at all (well it should not be)

There is only one thing that dictates when you commit -- when is your transaction complete? Then you commit, period.


Frequently committing in the mistaken belief that you are conserving resources or making things go faster only does this for you:

o slows you down, yes, that is right, SLOWS YOU DOWN -- makes you run SLOOOWWWWEEERRR

o doesn't conserve any resources, in fact, it consumes more (you generate MORE redo)

o seriously puts into question the integrity of your data


If you want to hear me wail on about this for many pages -- pick up my book "Expert One on One". The "proofs" of what I say are in there in total.

Visit Steve Adams' website as well ( http://www.ixora.com.au/ ) -- he talks about this too.


Articles Verification

Basharat, August 10, 2002 - 3:01 am UTC

Tom, thanks for making it all clear. At the same time, I would suggest that DbaSupport.com verify the content of articles before publishing them; otherwise the site will lose its credibility.

Tom Kyte
August 10, 2002 - 9:28 am UTC

Hah, good luck. (we have somewhat the same issue from time to time on the Oracle magazine site -- that is exactly how asktom got started actually).

The only way to do what you propose would be to make everyone prove scientifically their claims -- in a manner similar to what I do here. When I say something, I generally try to back it up with some reproducible test case.

For example, in my book -- I have clear and explicit test cases that show (prove) that committing frequently

o consumes additional resources
o runs slower
o leads to serious data integrity issues
o is a waste of a developer's time






One issue

Arindom Dam, August 15, 2002 - 10:20 am UTC

Your description makes total sense..
However, there's one issue that I would like to have your thoughts on...

What do you propose when a large transaction runs out of rollback segment space? It seems like the only way around it is to have some level of commit frequency?

Thanks.

Tom Kyte
August 15, 2002 - 7:12 pm UTC

Nope, I feel (strongly) that your rollback segments should be sized for all of your day to day transactions. Period.

If you have a large transaction (keyword = transaction) then by definition you cannot commit. Else, by definition, you have lots of little itty bitty transactions, not a big one. If data integrity is something you care about and you want to avoid writing TONS of procedural code -- you will right size your rollback segments and just do it.

(You know, for those "big" one time transactions, I would just keep a closet full of 36gig drives -- they cost about USD $250. Just plug one in, use it and throw it away when you are done. It'll be more cost effective than writing the one time procedural code that needs to be restartable (for WHEN it fails) and must not update the same row twice (which typically corrupts the data).)

More of the same ...

Dave, August 15, 2002 - 12:17 pm UTC

http://www.dbasupport.com/oracle/ora9i/performance/tuning-9.shtml

"49.GROUP BY and Predicate Clauses" completely misses the point of the HAVING clause, and is trivial.
"51.Use Explicit Cursors" seems misguided
"53.Table and Index Splitting" is downright wrong

Those articles are riddled with errors and urban legend, it seems.

Tom Kyte
August 15, 2002 - 7:19 pm UTC

51 is my personal favorite.

Just fyi, the author emailed me asking for comments -- I commented on 1-26 and sent him my feedback point by point. We'll see what happens with it now.

(49 is a copy of 14 btw more or less)

A reader, August 21, 2002 - 12:15 pm UTC

Tom, can you please post those comments (about the whole article) so they are beneficial to everyone? Thanks for the priceless service you are doing for the Oracle community!

Tom Kyte
August 21, 2002 - 12:21 pm UTC

Here is the bottom line:

ignore all the points in that article as it exists now. I was a little "harsh" -- that is, straightforward, perhaps -- in my review directly to him and do not feel like cleaning it up for posting here.

I told him he should retract it all right away and then republish it later as a series of articles that debunk many of the "myths" from the tuning books out there. He would use the approach of scientific proof. He has chosen not to take that path, although he did say he would rewrite and post a corrected version in 3 or 4 days -- but that was 5 days ago.

so for now, pretend that series of articles doesn't exist.

Confused here...

Ik, September 16, 2002 - 4:22 pm UTC

Tom,
Saw the message posted on your site's front page and thus went through the threads of discussion.
On my applications, both PL/SQL and Pro*C, we do a COMMIT after every 1000 records and change a status flag for the processed records. Thus the next time the program runs (if the previous run was aborted due to an error), it does not pick up the processed records.
That's the scenario. Do you feel that what we are doing is BAD and that the COMMIT should be only at the end of the process? The number of records to be processed is in the millions.

regards

Tom Kyte
September 16, 2002 - 8:22 pm UTC

I might be more in favor of a time based commit (Steve Adams has a write up of this on his site, commit say every N minutes) rather than N records. 1000 is sort of small, 100,000 or more would make me feel better.

You have a sort of special case -- you are easily restartable and apparently you do not get an ora-1555 so you either

a) have code that selects 1000 records, processes, commits and starts over
b) rollback segments big enough that they do not wrap -- meaning you could do this in one go after all
c) don't query the table you are processing (seems unlikely)
d) have been unusually lucky.


999 times out of 1000 people do not make their stuff restartable and boy do they get cranky when they hit the "ora-1555"
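For reference, a minimal sketch of the flag-driven, restartable batch Ik describes -- BIG_TAB, the PROCESSED column and the dummy per-row work are placeholder names, and this only illustrates why the approach is restartable, not an endorsement of the commit frequency:

begin
  loop
    update big_tab
       set some_col  = upper(some_col),   -- whatever the per-row processing is
           processed = 'Y'
     where processed = 'N'
       and rownum   <= 100000;

    exit when sql%rowcount = 0;

    commit;   -- restartable: already-processed rows are flagged in the same transaction
  end loop;

  commit;
end;
/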



Author Removed Pages :)

Dwayne Croteau, September 18, 2002 - 11:41 am UTC

Hey Tom,

Mission Accomplished!! - those pages are gone !!


Avoiding Commits when processing large volumes of data

Denise, April 29, 2003 - 3:50 pm UTC

As one who has experienced the dreaded ora-01555 "snapshot too old" error message when updating a large number of rows, I stopped the practice of committing every 'x' records during the loop-fetch cycle.

I know many say that if Rollback Segments are sized adequately and there are enough of them, then you should be able to process large numbers of rows (I'm speaking of millions here...) with no problems.

What has worked for us is that I have created one huge massive Rollback Segment that I keep offline. When I know I have a large update to run after business hours, I will take the regular RS offline, switch on the massive RS and run my script, query or procedure. It works every time and I never have a problem completing the task. When the entire process is done... I COMMIT;
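For reference, the kind of commands Denise is describing (manually managed rollback segments, pre-automatic-undo; the segment name BIG_RBS is a placeholder) look like this:

alter rollback segment big_rbs online;

set transaction use rollback segment big_rbs;

-- ... run the large update here, then:
commit;

alter rollback segment big_rbs offline;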






****** <- six stars for the article

Yavor Ivanov, May 23, 2003 - 8:36 am UTC

Why use frequent commits? Just tear off the transaction processing capabilities from Oracle! No transactions -- no overhead. Ha. Ha. Ha.

One of the 3 basics of a transaction is _completeness_ -- a synonym of _integrity_.

Confused about frequent commits -- need your help!

Alan sinsuan, July 23, 2003 - 10:12 pm UTC

Tom,
Here is the situation: I'm doing an insert/select statement that would insert about 1.5 million records and I keep hitting "cannot extend rollback segment", but our DBA told me to break my statement into smaller batches and do a commit on each one. Below is what he says: "
If you are really inserting 1.5 million records in one transaction, that is not recommended. If all these records are going to the same table, then each record represents a transaction. The concept of a transaction is that, related information in "different" tables should be managed as a group. When a set of related records has been inserted then a "commit" should occur to make the inserts permanent." Can you guide me here? Thank you very much and more power.

Tom Kyte
July 24, 2003 - 8:37 am UTC

ask the DBA -- so what happens when I've inserted 100,000 records and the process fails.

Now, I've got 100,000 records loaded, 1.4 million more to go -- but tell me -- how pray tell do I restart my transaction? the source data set is going to want to give me those 100,000 records again.

You can either spend 1,000's of $$$$ on your time writing tons of procedural code to try and solve a problem that should not exist OR

the DBA can do their job and size your system properly. Bulk loading 1.5 million records should probably be done in this fashion:

a) disable the indexes
b) insert /*+ APPEND */ into the table
c) rebuild the indexes

that will generate no undo of any sizable amount. If you are in ARCHIVELOG mode and coordinate with your DBA, you can even set the tables/indexes to nologging, skipping REDO generation, and then have them schedule a backup.
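A sketch of that approach (table and index names are placeholders; NOLOGGING and the follow-up backup must be coordinated with the DBA, as noted above):

-- make index maintenance a one-time rebuild instead of per-row work
alter index t_big_idx unusable;
alter session set skip_unusable_indexes = true;

alter table t_big nologging;

insert /*+ APPEND */ into t_big
select * from t_stage;

commit;

alter index t_big_idx rebuild nologging;
alter table t_big logging;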

9ir2 documentation

bob, August 05, 2003 - 2:38 pm UTC

Tom,

In the rollback section of the 9iR2 admin guide (assuming you are still using rollback), it says:

Approximate Rollback Segment Sizes
with regards to snapshot too old error:

This error occurs because the rollback entries needed for read consistency are overwritten as other update entries wrap around the rollback segment. Consider this issue when designing an application's transactions, and make them short atomic units of work so that you can avoid this problem.
-----
Doesn't this "short atomic unit of work" hint at the commit frequently mentally? Should this instead say, "when the transaction is done".. which isn't unclear at all.

That same section discusses the "use different rbs for different types of transactions" scenario.

When database systems applications concurrently issue a mix of very short and very long transactions, performance can be optimized if transactions are explicitly assigned to a rollback segment based on the transaction/rollback segment size. You can minimize dynamic extent allocation and truncation for rollback segments. This is not required for most systems and is intended for extremely large or small transactions.

To optimize performance when issuing a mix of extremely small and large transactions, make a number of rollback segments of appropriate size for each type of transaction (such as small, medium, and large). Most rollback segments should correspond to the typical transactions, with a fewer number of rollback segments for the atypical transactions. Then set OPTIMAL for each such rollback segment so that the rollback segment returns to its intended size if it has to grow.

You should tell users about the different sets of rollback segments that correspond to the different types of transactions.
**Often, it is not beneficial to assign a transaction explicitly to a specific rollback segment.**

However, you can assign an atypical transaction to an appropriate rollback segment created for such transactions. For example, you can assign a transaction that contains a large batch job to a large rollback segment.



Tom Kyte
August 05, 2003 - 2:56 pm UTC


...
Consider
this issue when designing an application's transactions, and make them short
atomic units of work so that you can avoid this problem.
.......

I would consider that a "bug". Not that it is wrong, for a transaction should be:

a) as LONG as it needs to be
b) as SHORT as it can be

that is -- the transaction defines its boundaries and NOTHING else. Not rollback, nothing. A transaction is a transaction and should not be artificially extended or shortened.

But to say that short transactions will solve this problem isn't 100% accurate -- unless you make the QUERIES themselves short and I think that is what they meant. "Transaction" includes "read"






It all depends on the required rollback behaviour...

Manohar, August 06, 2003 - 4:12 am UTC

Tom,
I think, when doing bulk updates, the required rollback behaviour is also important in defining the "transaction".
Different scenarios:
1) insert into A select * from B; (either do the whole thing or roll back completely)
2) for c in (select * from B) loop
     begin
       insert into A (col1, col2, ..) values (c.col1, c.col2, ..);
     exception
       when others then null;  -- continue with inserting the other records
     end;
   end loop;
3) We can also use bulk insert if that's the desired rollback behaviour.
What I wanted to say is that the required rollback behaviour also affects how we define the transaction.

oracle doc error on this?

bob, June 21, 2004 - 1:26 pm UTC

Tom,

9.2 Utilities guide on Import says

"Specifying COMMIT=Y prevents rollback segments from growing inordinantly and improves performance of large imports"...

OK, rollback/undo doesn't grow inordinately with these frequent commits, but "improves performance"? Must be a doc bug?

Tom Kyte
June 21, 2004 - 8:10 pm UTC

yes, it is.

A reader, August 29, 2005 - 3:24 pm UTC


Committing frequently

vishal mehrotra, October 31, 2006 - 4:05 am UTC

Tom,

This article cleared many of our myths regarding frequent commits.

Frequent Commits

Peeush Trikha, October 31, 2006 - 4:11 am UTC

Mine too .

A reader, July 09, 2007 - 11:53 am UTC

Tom,

Why does Frequent commit use MORE redo?

I can see the redo will have the AFTER image and the BEFORE image (from undo) from an update statement, for example, but the volume of data would probably be the same whether I commit once per row or once per transaction?

I am assuming the volume is the same because they are the same AFTER and BEFORE images regardless of commit frequency.

I can understand clearly about extra CPU work and extra consistent gets that could be consumed from frequent commits but not why extra REDO. I think you have a clear test case in your first book.

Thanks

Ravi


Tom Kyte
July 09, 2007 - 2:19 pm UTC

because every commit generates redo (the act of issuing "commit" does)

because every transaction has a "wrapper" around it, a data structure, so there is the "we are beginning - we did this - we are ending" stuff, you repeat the we are beginning/ending over and over.

take a script like this:

it inserts 1000 records into an empty table T. Either we do it right and commit once, or we do it wrong with sqlplus autocommit and do it after every insert:


drop table t;

create table t ( x int );

set termout off
set heading off
set feedback off
spool x.sql
select 'insert into t values ( 1 );' from (select level l from dual connect by level <= 1000 );
spool off
set heading on
set feedback on
set termout on

@mystat "redo size"
set autocommit off
set termout off
@x.sql
set termout on
commit;
@mystat2

truncate table t;

@mystat "redo size"
set autocommit on
set termout off
@x.sql
set termout on
@mystat2
set autocommit off


Now, when you execute that:

ops$tkyte%ORA10GR2> create table t ( x int );

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> set termout off
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> @mystat "redo size"
ops$tkyte%ORA10GR2> set echo off

NAME                        VALUE
---------------------- ----------
redo size                   11216

1 row selected.

ops$tkyte%ORA10GR2> set autocommit off
ops$tkyte%ORA10GR2> set termout off
ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2> @mystat2
ops$tkyte%ORA10GR2> set echo off

NAME                        VALUE DIFF
---------------------- ---------- ------------------
redo size                  256956          245,740

1 row selected.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> truncate table t;

Table truncated.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> @mystat "redo size"
ops$tkyte%ORA10GR2> set echo off

NAME                        VALUE
---------------------- ----------
redo size                  262324

1 row selected.

ops$tkyte%ORA10GR2> set autocommit on
ops$tkyte%ORA10GR2> set termout off
ops$tkyte%ORA10GR2> @mystat2
ops$tkyte%ORA10GR2> set echo off

NAME                        VALUE DIFF
---------------------- ---------- ------------------
redo size                  803072          540,748

1 row selected.

ops$tkyte%ORA10GR2> set autocommit off


245k vs 540k in that case...

Well, but what about bitmap index deadlocks?

Fred, August 30, 2007 - 10:22 am UTC

After reading this thread and recreating the demos in Chapter 9 of Expert Oracle Database Architecture, I am a convert to the Frequent-Commits-Are-Bad school.

But...

Recently a client asked me to investigate a deadlock issue. The root of the problem was a piece of code that looped over a (large, but not huge) number of inserts into a table containing several bitmap indexes and then committed at the bottom of the loop. I advised the client to commit more frequently, to lessen the likelihood of deadlock.

So I'm wondering:

1. Am I correct to believe that the longer the transaction on the bitmap-indexed table stayed open, the greater the likelihood of the session deadlocking itself?

2. Would it have been better advice to insist that bitmap indexes were inappropriate for this table, and to advise replacing them with btrees?
(Seems obvious the more I think about it, but maybe I'm just looking for a dope-slap from Tom this morning).
Tom Kyte
September 04, 2007 - 5:18 pm UTC

single row modifications to a bitmap index are the kiss of death - period.

better would be to advise the client to think in sets, lose the procedural code and do the modifications to the bitmap index in a single SQL statement.

and then commit.



1) the session would never deadlock itself as you describe, you need two transactions to deadlock, you only have one.

2) not so fast - we cannot answer that - I can only say "single row modification = bad for bitmaps, use SET operations and do it in a single DML"
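In other words, something along these lines (a sketch only; FACT_T and STAGE_T are placeholder names) replaces the row-by-row loop with one set-based statement and one commit:

insert into fact_t ( id, status_code )
select id, status_code
  from stage_t;

commit;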

Bitmap indexes

Billy, September 04, 2007 - 6:29 am UTC

Fred, you're on the mark with the 2nd point - bitmap indexes are less than ideal when dealing with the typical OLTP table that is subject to concurrent CRUD.

Never mind the locking issue (remember that a single bitmap index entry covers a range of rows), there is the growth issue to consider. Frequent updates make a bitmap index look bigger and uglier than Jabba the Hutt.

Excellent

Brian Tkatch, December 05, 2007 - 9:53 am UTC

Excellent article, Tom. It only proves I have so much more to learn.

I first bumped into this issue with DELETEs. I would just like to ask for verification here.

If a TABLE has three million records in it, and I must DELETE one million records based on an INDEX, it has been my experience that DELETEing ten thousand records at a time (with a COMMIT) seems to go much faster than DELETEing them all at once.

I thought (incorrectly) that it took so long because it had to store the redo, which became geometrically harder with each new record.

Is the DELETE normally faster in chunks for the same reason, that the redo logs were not big enough?
Tom Kyte
December 10, 2007 - 7:57 am UTC

... and i must DELETE one million
records based on an INDEX, ...


that is oxymoronic. You never "must delete 1,000,000 records based on an index"

if I were to delete 1/3 of the rows of a table, I would almost certainly prefer a full scan.


"seems" - bah, humbug. prove it, test it, time it, show me your PROCEDURAL CODE working faster than my simple:

delete from t where <condition>;



Urge to Kill rising

Scott H., January 17, 2008 - 11:30 am UTC

So, my boss asks me to add some code to a procedure I've never touched before. I go into it and take a look: 13 commits in one back-end procedure. My mouth dropped and I asked why. The other developer's response: "Well, next time you run it and fix the errors, that will run." My response: "Do you even have a concept of what a transaction is?"

Not a question, just a gripe. This makes me want to rewrite our entire EDI system!!!

-Scott
Tom Kyte
January 19, 2008 - 10:20 pm UTC

I feel your pain.

frequent commits required for LOB inserts(?)

Zoe, January 20, 2008 - 3:58 am UTC

I'm using PL/SQL to load Multimedia LOBs into the database over the Web. I have no idea ahead of time if a user will load one file or 100 files. I've found that PL/SQL is the most consistently accurate way to load files and it works like a charm in a single-user environment. It works well when 5 users are simultaneously uploading, too. But when I start getting 20+ users all uploading at the same time, the COMMIT frequency goes through the roof. This is because each LOB insert does a "SELECT ... FOR UPDATE" and has to issue a commit before running again for the next file.

Is there a "BULK COLLECT" type PL/SQL trick to use with LOBs (besides SQL*Loader) or am I stuck with committing after each LOB insert?


Tom Kyte
January 20, 2008 - 8:01 am UTC

you would need to be a tad more clear here - there is no reason you HAVE to issue a commit, you must be wanting to or have programmed it that way.

select for update doesn't force you into committing, you are doing that.

and why do you think the commit frequency going through the roof (which I doubt, if you are loading lobs, I would hazard a guess that your big waits are for physical IO on the lob tablespace as by default they are direct pathed, not cached) is the cause of any performance related issue. What analysis did you do to see that "commit frequency" is the cause of any issues?

waiting to commit...

Zoe, January 20, 2008 - 11:16 am UTC



Thanks Tom for your reply.

I noticed several wait events over the past few days on "commit" during a heavy load period (about 12 users simultaneously uploading files). I/O is a problem as well, like you said, and I'll be looking into that also.

I think I can address the commit Wait Events by taking out the explicit commit; in my procedure and just call it several times per session. I'll hit the test machine this afternoon and give it a whirl.

The way I'm doing it now just seems to go against everything this thread has taught me so far - and issuing a commit; after each LOB insert doesn't seem scalable or performant at all. I'm hoping to improve the process if possible.

I also read in your Expert Database Architecture book, on page 552 (last paragraph under Cache Clause heading), "...you can make excellent use of the Keep and Recycle Pool here..." (in reference to enabling caching of LOBs during a large initial BLOB load)

Can you please elaborate a bit to show how (and if) the RECYCLE POOL could be utilized to write LOBs to the cache first instead of directly to disk? Is it just in the LOB storage clause, specifying BUFFER_POOL RECYCLE. Or can we only use the DEFAULT POOL for writing to during a load?

Thanks Tom,
Zoe
Tom Kyte
January 20, 2008 - 12:13 pm UTC

how did you notice this - what did you measure, how did you measure it, and what were the measurements?




You can use any buffer pool you want to cache the lobs, you just use the segment attributes on the lob segment either during create or via an alter.
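For example (a sketch only -- DOCS and CONTENT are placeholder names, and the choice of pool depends on your load pattern), the LOB storage clause can name the pool at create time:

create table docs
( id      number primary key,
  content blob
)
lob (content) store as
( cache                            -- read/write the LOB through the buffer cache
  storage ( buffer_pool recycle )  -- keep it out of the default pool
);

An existing LOB segment's attributes can likewise be changed with ALTER TABLE ... MODIFY LOB, per the "via an alter" note above.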

metrics

Zoe, January 20, 2008 - 1:17 pm UTC

Thanks Tom.

I have observed the following from the OEM Database Control Console:

 Details 
 Jan 11, 2008 2:01:06 AM Metrics "Database Time Spent Waiting (%)" is at 97.74354 for event class "Commit" - 
 Dec 31, 2007 8:01:08 PM Metrics "Database Time Spent Waiting (%)" is at 94.14906 for event class "Commit" - 
 Dec 23, 2007 11:01:06 PM Metrics "Database Time Spent Waiting (%)" is at 93.8445 for event class "Commit" - 
 Dec 24, 2007 11:01:29 PM Metrics "Database Time Spent Waiting (%)" is at 93.39415 for event class "Commit" - 
 Jan 11, 2008 9:01:12 AM Metrics "Database Time Spent Waiting (%)" is at 93.37386 for event class "Commit" - 
 Jan 8, 2008 5:02:00 AM Metrics "Database Time Spent Waiting (%)" is at 92.55847 for event class "Commit" - 
 Jan 19, 2008 4:01:11 AM Metrics "Database Time Spent Waiting (%)" is at 91.88583 for event class "Commit" - 
 Dec 24, 2007 2:01:20 PM Metrics "Database Time Spent Waiting (%)" is at 91.44676 for event class "Commit" - 

... and drops by about 1% all the way down to 

 Dec 25, 2007 7:08:49 PM Metrics "Database Time Spent Waiting (%)" is at .18437 for event class "Commit" 


I've done some tests and whenever I do a multi-user loading run (12+ users continuously loading files) for about 5-7 minutes, I notice the Waiting % usually hits between 80% and 97%. Of course, my I/O Waits are horrible too because I'm in RAID5 for everything and using NOCACHE. Maybe this is what's causing the Wait on Commit. I'll look deeper into that.

I suppose I could reset the threshold to make this go away; but not sure if that'd be sweeping it under the carpet.

Thanks again for your valuable input,

Zoe


Tom Kyte
January 20, 2008 - 1:41 pm UTC

97% of *what*

and I'd want to know if my application was waiting on this - so I'd be looking into a tkprof OR some ASH information.

You'll want to quantify the "of what", 97% of 1 second or 1,000,000 seconds.

testing on frequent commit

A reader, February 04, 2008 - 2:10 pm UTC

Hey Tom:

I am reading Expert One on One recently. There is a small test in the book comparing committing frequently with committing once: the time for frequent commits is 21 hsecs and the time for committing once is 7 hsecs. I set up my own test as follows, but get similar times. Am I doing something wrong?

set timing on
spool commit_frequent.log
truncate table test;

insert into test(da_id, source_code)
select da_id, source_code
from mydata;

commit;

DECLARE
        v_new_da_id             number := 5;
        v_err_msg               varchar2(4000);
        v_err_ctr               number := 0;
        v_ctr                   number := 0;
        l_start                 number default dbms_utility.get_time;
        CURSOR cur IS
        select * from test;

BEGIN
        FOR x in cur LOOP
        BEGIN
        v_new_da_id := 0;
        v_err_msg := null;
        v_err_ctr := 0;
        v_ctr := 0;

       if x.source_code = 'BOOK'
        then
          v_new_da_id := 1;
        else
          v_new_da_id  := 0;
        end if;


        update test
               set da_id = v_new_da_id
        where  da_id = x.da_id;

        commit;

        EXCEPTION
        when others then
        v_err_msg := sqlerrm;
        v_err_ctr := v_err_ctr + 1;
        dbms_output.put_line('ERROR ON: '||x.da_id||' ERR MSG => '||v_err_msg);
        
        commit;

        END;
        v_ctr := v_ctr + 1;
        END LOOP;
dbms_output.put_line(dbms_utility.get_time-l_start || 'hsecs ');
END;
/

***********************************************
SQL> @commit_frequent.sql > commit_frequent.log

Table truncated.

Elapsed: 00:00:00.09

10000 rows created.

Elapsed: 00:00:00.03

Commit complete.

Elapsed: 00:00:00.04
1525hsecs

PL/SQL procedure successfully completed.

Elapsed: 00:00:15.27
*****************************

set timing on
spool commit_once.log
truncate table test;

insert into test(da_id, source_code)
select da_id, source_code
from mydata;

commit;

DECLARE
        v_new_da_id             number := 5;
        v_err_msg               varchar2(4000);
        v_err_ctr               number := 0;
        v_ctr                   number := 0;
        l_start                 number default dbms_utility.get_time;
        CURSOR cur IS
        select * from test;

BEGIN
        FOR x in cur LOOP
        BEGIN
        v_new_da_id := 0;
        v_err_msg := null;
        v_err_ctr := 0;
        v_ctr := 0;

       if x.source_code = 'BOOK'
        then
          v_new_da_id := 1;
        else
          v_new_da_id  := 0;
        end if;


        update test
               set da_id = v_new_da_id
        where  da_id = x.da_id;

        

        EXCEPTION
        when others then
        v_err_msg := sqlerrm;
        v_err_ctr := v_err_ctr + 1;
        dbms_output.put_line('ERROR ON: '||x.da_id||' ERR MSG => '||v_err_msg);
        
        commit;

        END;
        v_ctr := v_ctr + 1;
        END LOOP;
dbms_output.put_line(dbms_utility.get_time-l_start || 'hsecs ');

commit;

END;
/
******************************************
SQL>  @commit_once.sql > commit_once.log

Table truncated.

Elapsed: 00:00:00.10

10000 rows created.

Elapsed: 00:00:00.03

Commit complete.

Elapsed: 00:00:00.04
1469hsecs

PL/SQL procedure successfully completed.

Elapsed: 00:00:14.71
********************************************

Tom Kyte
February 04, 2008 - 5:00 pm UTC

plsql has a commit time optimization that pretty much makes it not useful to demonstrate this. I guess what my example was showing was the slow by slow versus "really fast, do it all at once"

(I hate your exception block by the way, I hope you do NOTHING remotely similar to that in real life...)


You'd have to use the synchronous commit to see this effect in plsql - it is what you would see in all other languages outside the database - the wait for a log file sync...

ops$tkyte%ORA10GR2> begin
  2          for x in ( select rowid rid from t )
  3          loop
  4                  update t set object_name = reverse(object_name) where rowid = x.rid;
  5          end loop;
  6          commit;
  7  end;
  8  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.67
ops$tkyte%ORA10GR2> begin
  2          for x in ( select rowid rid from t )
  3          loop
  4                  update t set object_name = reverse(object_name) where rowid = x.rid;
  5                  commit work write immediate wait;
  6          end loop;
  7  end;
  8  /


PL/SQL procedure successfully completed.

Elapsed: 00:03:43.49
ops$tkyte%ORA10GR2>

question about reply

prayerhuang, February 05, 2008 - 9:51 am UTC

Tom, thank you very much for your quick response!

I have some questions about ur reply:
(I hate your exception block by the way, I hope you do NOTHING remotely similar to that in real life...)

What does this mean? Many of our production scripts are in this format (the format I used). But normally, we run them directly on the server. Is this good or not? If this is not appropriate, what kind of format should we use? We run scripts to load data and update the existing data. And what does "remotely" mean?

sorry too much question :P. Thank you!

Tom Kyte
February 05, 2008 - 10:30 am UTC

"UR" made no replies - can you be more specific what you might mean by "UR", it has to be a name - I cannot find it in the dictionary.

unless you are talking about http://www.m-w.com/dictionary/ur - but I doubt it.




Answer me this: why do you code:
        EXCEPTION
        when others then
        v_err_msg := sqlerrm;
        v_err_ctr := v_err_ctr + 1;
        dbms_output.put_line('ERROR ON: '||x.da_id||' ERR MSG => '||v_err_msg);

what do you gain from that? What did you accomplish. Why why why did you catch it - if you cannot do a thing about it.


and just dbms_output it, man, you hit an error - you have no idea what the error is, but you continue on anyway. You "log it" to a terminal window. I fail to understand the "logic" here - you might as well just erase all of the code between the begin and exception - you don't care if it runs (it is true, you don't - it is OK for it to fail, you just continue on).

More WHEN OTHERS madness

Stew Ashton, February 05, 2008 - 10:53 am UTC

"You just continue on...", if you're lucky.

Yesterday, someone ran some PL/SQL from a terminal. It included
EXCEPTION
WHEN OTHERS THEN
RETURN FALSE;
The code failed, but the transaction did not end because the exception was caught and not re-RAISED.

So the transaction is still there, holding on to a bunch of UNDO. And guess what? No one else can update the database because there is no UNDO free!

So because of this coding practice, no one can "continue on".

True story, happening right now.

prayerhuang, February 05, 2008 - 11:54 am UTC

it is you + r = your, I use "u" for you, so ur for your...
sorry to make you confused.

And since we update a lot of data every day: most of the time the new data we need for the update is available, but some of it is still missing. So what we do is record the number of errors we get; if it is big, we roll all the way back and reload after the data is ready. If it is just a couple of errors, we update the data for the specific da_id later, when that data is ready. There should be another output for the number of errors. That is the logic.

For our circumstances, do we need to change the script to commit once or leave it alone? (As you said before, PL/SQL has a commit time optimization that pretty much makes it not useful to demonstrate this.)

and thank you, Stew Ashton, I will keep your case in mind.
Tom Kyte
February 05, 2008 - 1:53 pm UTC

if you are on 10g and above, use LOG ERRORS

if you are not, use SAVE EXCEPTIONS


search for either on this site.


I would not commit until I was done - you said yourself "we roll all of the way back", you won't be doing that if you commit frequently.
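A minimal sketch of the 10g LOG ERRORS approach (assuming a load from a placeholder STAGE_T into NEW_T and that you are allowed to create the error log table):

exec dbms_errlog.create_error_log( 'NEW_T' )

insert into new_t
select * from stage_t
log errors into err$_new_t ('nightly load') reject limit unlimited;

commit;

-- the rows that failed, and why, without killing the whole statement
select ora_err_number$, ora_err_mesg$ from err$_new_t;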

Joe Bloggs, May 29, 2008 - 7:27 am UTC

I once had a intervue 4 a company who gave me this snippet ov code (or somefink similar - it was a long time ago!), and asked me wot was wrong wiv it:

DECLARE
  A := 50;
  B := 10;
  --
BEGIN
  C := A + B
  --
EXCEPTION
  WHEN OTHERS THEN NULL;
END;



There answer was that a semi-colon was missing from da middle line. "Ah-ha. U didn't spot that, did u?", 2 witch I replied, "Well I wood nomally let the compiler find doze kinds of mis-takes, but why have u got a 'WHEN OTHERS THEN NULL' in da expection handler?".
"It captures all the errors", they said.


D'oh! I exclaimed. But I got da job. :-)
Tom Kyte
May 29, 2008 - 8:14 am UTC

good, now you can afford the class in spelling....

roflol

MH, May 29, 2008 - 1:44 pm UTC

Again Tom shares his invaluable knowledge :)

what if you can't avoid the commits

Laurence, April 08, 2010 - 6:56 am UTC

Hi,

we have an app here that is an off-the-shelf package and we can't change it. It does 1.5m inserts, all as individual transactions, and commits after each insert. Clearly this isn't efficient for all the reasons above.
but...
we can't change it and it's taking hours.... so what can we do to the database config so that IF this type of thing must happen, it happens as fast as possible??
Tom Kyte
April 13, 2010 - 8:01 am UTC

you would have to talk to your vendor.

Look - it is not as simple as "we'll just change the fundamental way this program works - we'll just change its transactions without telling it - and it'll work just like it did before". You cannot do that, no one can do that - it sort of defeats the purpose of transactions if you could - when they say "commit", they meant to say "commit" and a commit must happen - it is the only way.

The only ones that can change this are the developers, they would have to do an impact analysis and ask themselves "what would happen if we committed at the end, or every 5 minutes - what else would we have to change in our code in order to achieve that"


This is why I always wonder why people do deep dive technology benchmarks of things like a database or app server - but they NEVER do that for their apps. They tend to let the "business" pick the app and the business says "ohh, what pretty screens you have" and away you go. And that it doesn't perform or is 'architected' somewhat 'poorly' is never a consideration until it doesn't run.

And then we are supposed to have a magic cureall switch to fix it :(


They don't exist, the magic cure-alls.




To make log file sync 'better' you either

a) make commits faster (infinitely fast disk only gets you so far).
b) commit less often


You can influence (a) marginally. (B) would have the most impact, but you have no control over that.

you are very right, but

Sokrates, April 14, 2010 - 3:10 am UTC

absolutely correct, 100% agree -- the developers of this poor code have to fix that.

( by the way, I hope, since Oracle now owns java, that JDBC-default "autocommit on" will disappear soon )

Though, there might be one thing Laurence from UK could do, Oracle has implemented some magic:
asynchronous commit

He could set up a logon trigger which does a
alter session set commit_write='BATCH,NOWAIT'
( or even set this system-wide )

Of course he must be aware that a COMMIT then has quite different semantics than before, but probably he wants to do this.

I just did a quick test which showed a performance gain of factor 7:

SQL> !cat /tmp/c.sql 
spool /tmp/t.sql
select 'variable s number' from dual;
select 'variable t number' from dual;
select 'exec :s:=dbms_utility.get_time' from dual;
select q'<insert into temp values(lpad('a', 2000));>'||chr(10)||'commit;'
from dual
connect by level<=1E4;
select 'exec :t:=dbms_utility.get_time' from dual;
spool off

SQL> set pages 0 lines 200 trimspool on feedback off termout off
SQL> sta /tmp/c
SQL> create table temp(x char(2000));
SQL> set echo off serverout on
SQL> sta /tmp/t
SQL> exec dbms_output.put_line(round((:t - :s)/100,2) || ' seconds.')
49.33 seconds.
SQL> drop table temp purge;
SQL> create table temp(x char(2000));
SQL> alter session set commit_write='BATCH,NOWAIT';
SQL> sta /tmp/t
SQL> exec dbms_output.put_line(round((:t - :s)/100,2) || ' seconds.')
7.52 seconds.
SQL> 


I repeat: a COMMIT now has different semantics than before -- one must be aware of that!
Tom Kyte
April 14, 2010 - 9:07 am UTC

but, that changes the transactional semantics

this is an utterly bad idea, one I was well aware of but refuse to talk about since it will end up in my slides as "things I wish we removed" - right after triggers, when others and autonomous transactions

please - do not do this - it'll CHANGE the way the program behaves in the event of a failure - I'll repeat: only the application developers stand a chance of knowing if this works properly or not.


I expected this answer

Sokrates, April 14, 2010 - 9:23 am UTC


but:
the designers of Oracle decided not only to support

commit work write batch nowait;


but also

alter session set commit_write='BATCH,NOWAIT';


why did they do this ?
I think, in order to help people like Laurence from UK

As long as you know it'll change the way the program behaves in the event of a failure, it's ok, isn't it ?

If you use this approach, it's your responsibility then to decide if you can live with this different behaviour or not.
Tom Kyte
April 14, 2010 - 9:33 am UTC

for programs that support bulk loading. That is about it. Something that has say 1,000,000,000 rows to load and goes like this:

read log table and see where we left off last time (this table could be empty of course, meaning start at the beginning).

seek to record X in the input file as dictated by "where we left off"

loop
   read up to 1,000 records
   insert them
   insert into log table ('processed records ' X ' thru ' X+records-read-1 )
   commit batch,nowait
   X = X+records read
   exit with EOF
end loop


that is it, specifically NOT for OLTP or anything NOT expecting their commits to disappear after being told "you are committed"

As long as the developers who coded the application and have an intimate knowledge of how it works and the ramifications of making the change will affect their application and those developers implement this change - it is OK.

What if they commit/write to file/commit/write to file/commit/write to file

or commit/do anything/commit/do anything/commit/do anything

and some of those commits disappear? Can you say that their application state is still safe? known? predictable? restartable?

only if they anticipated commits "disappearing".


You cannot make decisions for 3rd party applications - they call the shots, you have to live in their constraints, not your world.



The problem I have is this:

a) many people trust what they read
b) many people really trust what they read here
c) if I say once "x is OK" then many people will generalize that to "he said X is ok", even if I said "x is ok under these precise, specific, rare conditions: ....."


So, I will lump this in the category of when others, triggers and the like - truly useful feature, when in the hands of people that know how to use it correctly - which is not generally the universe of developers in general.
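For concreteness, the bulk-load pseudocode above might look something like this as real PL/SQL -- a sketch only, with STAGE_T, TARGET_T and LOAD_LOG as placeholder names and TARGET_T assumed to have the same columns as the cursor:

declare
  cursor c ( p_last number ) is
    select rec_id, payload
      from stage_t
     where rec_id > p_last
     order by rec_id;

  type t_rows is table of c%rowtype;
  l_rows    t_rows;
  l_last_id number;
begin
  -- where we left off last time (zero means start at the beginning)
  select nvl( max(last_rec_id), 0 ) into l_last_id from load_log;

  open c( l_last_id );
  loop
    fetch c bulk collect into l_rows limit 1000;
    exit when l_rows.count = 0;

    forall i in 1 .. l_rows.count
      insert into target_t values l_rows(i);

    -- the progress marker goes in the SAME transaction as the data,
    -- so a "lost" nowait commit loses both together and they stay in step
    insert into load_log ( last_rec_id )
    values ( l_rows( l_rows.count ).rec_id );

    commit write batch nowait;
  end loop;
  close c;

  commit;  -- final, durable commit
end;
/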


sorry, but

Sokrates, April 14, 2010 - 9:35 am UTC

a developer who codes the application does not need

alter session set commit_write='BATCH,NOWAIT';


the only thing he needs to support bulk loading is

commit work write batch nowait;


but also a parameter is supported to change the default commit behaviour
Tom Kyte
April 14, 2010 - 3:40 pm UTC

and I told you what it could be used for - the semantics (no wait commits). I don't care HOW you enable it. Alter session or in every commit in your code - do whatever.

but do not turn it on for just any application, you are changing the rules on them without telling them

The problem

Sokrates, April 14, 2010 - 9:37 am UTC

a) many people trust what they read
b) many people really trust what they read here
c) if I say once "x is OK" then many people will generalize that to "he said X is ok", even if I said "x is ok under these precise, specific, rare conditions: ....."


ok, I agree

SGA

A reader, April 14, 2010 - 1:08 pm UTC

1. Should SGA sizing for DEV/TEST instances be similar to the PROD instance? DEV/TEST have much lower transaction volume.

2. Is it normal/healthy or good practice to bounce the instance once per week - it seems that resets all the SGA stats collected by oracle.
Tom Kyte
April 14, 2010 - 4:14 pm UTC

1) do you want to see what would happen on prod to plans and such in test/dev? If so, you'd want an environment as similar as possible.

2) no, it is not considered a good practice, the opposite of good really.

Seeing weird behavior

Mobra, April 29, 2010 - 5:56 am UTC

I agree that a commit should only be issued when a logical transaction is completed.

However, I am seeing some weird behavior with multi-row updates during a long-running transaction.

I have a fairly simple update statement (UPDATE t SET col1 = ..., col2 = ... WHERE col3 = <some value>).

If I run the update statement in SQL Developer or TOAD, it completes in 5-6 seconds (for about 130,000 rows).

When I run the same update statement as part of a longer process where the same table has been inserted to and/or updated before, the update now takes 2-3 minutes.

The weird thing is if I stick a COMMIT in the middle of the process, before this update statement, the time is reduced to around 30 seconds.

Is there any logical explanation for this? Do you have any pointers as to how I can discover why it is taking so much longer inside the PL/SQL program than outside (or inside with the extra commit)?

I'd like to remove the commit from the middle of my logical transaction, but I want it to perform well, too! :-)
Tom Kyte
April 29, 2010 - 7:46 am UTC

trace it, it is what we would have to do - assuming you provide us with a reproducible test case.

it could be (probably is) related to index maintenance.


bear in mind, this is not a valid reason for even considering placing a commit in there.
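A hedged sketch of capturing that trace on 10g -- DBMS_MONITOR for the current session, then format the resulting trace file with tkprof:

exec dbms_monitor.session_trace_enable( waits => true, binds => false )

-- ... run the full process containing the slow UPDATE here ...

exec dbms_monitor.session_trace_disable

-- then, on the server: tkprof <tracefile>.trc report.txt sys=no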

What Does This Actually Tell Me

Jim, November 08, 2010 - 12:40 pm UTC

Hi Tom

I have statements like those above, and I am not sure just exactly what they are telling me in this database (10.2.0.5.1) running on the Windows 2003 64-bit platform, other than it is waiting to commit.
Can you assist me on just what these mean and what I should look at to solve the problem?

Metrics "Database Time Spent Waiting (%)" is at 60.82248 for event class "Commit" -
Nov 8, 2010 10:10:56 AM Metrics "Database Time Spent Waiting (%)" is at 40.01814 for event class "Commit" -
Nov 8, 2010 10:09:56 AM Metrics "Database Time Spent Waiting (%)" is at 54.40708 for event class "Commit" -
Nov 8, 2010 10:05:44 AM Metrics "Database Time Spent Waiting (%)" is at 35.36435 for event class "Commit" -
Nov 8, 2010 9:35:17 AM Metrics "Database Time Spent Waiting (%)" is at 72.99695 for event class "Commit" -
Nov 8, 2010 9:34:14 AM Metrics "Database Time Spent Waiting (%)" is at 35.65332 for event class "Commit" -
Nov 8, 2010 9:16:23 AM Metrics "Database Time Spent Waiting (%)" is at 73.98091 for event class "Commit" -
Nov 8, 2010 9:15:20 AM Metrics "Database Time Spent Waiting (%)" is at 38.06514 for event class "Commit" -
Nov 8, 2010 7:08:22 AM Metrics "Database Time Spent Waiting (%)" is at 78.3981 for event class "Commit" -
Nov 8, 2010 7:07:19 AM Metrics "Database Time Spent Waiting (%)" is at 41.08747 for event class "Commit"


The only thing i see in alert log is back on 11-3:
Wed Nov 03 22:00:49 Pacific Daylight Time 2010
Thread 1 cannot allocate new log, sequence 1084
Checkpoint not complete
Current log# 3 seq# 1083 mem# 0: K:\ORACLE\ORADATA\CDRL\REDO_03A.LOG
Current log# 3 seq# 1083 mem# 1: I:\ORACLE\ORADATA\CDRL\REDO_03B.LOG


Thanks
Jim
Tom Kyte
November 08, 2010 - 1:09 pm UTC

Mapping scripts

PAGAR, February 09, 2012 - 12:17 pm UTC

Hi Tom,

Hope you are doing great !

I am working on mapping scripts to migrate data from staging tables into new tables in Oracle 11g. My mapping scripts mostly have INSERT INTO ... SELECT statements. Please could you let me know how I can do a commit after every 1 million records? I am asking this because we can have a scenario where the INSERT INTO ... SELECT fails after, say, the 1,000,001st record and all the inserted data gets rolled back, so we have to start again from the beginning.

commits for nothing

Steve, April 16, 2013 - 10:09 am UTC

Hi Tom,

I've known for a while our Java application commits way too often and the response from our developers is sadly unsurprising: changing this behavior will require too much work.

What I just noticed in log miner is that in a lot of instances the software begins a transaction with "set transaction read write" then instead of actually doing some DML it just issues a commit. I suppose the good thing is that the transaction is ended.

Is it safe to assume this behavior is just as bad as frequent commits when there is actually some change to commit? If it is just as bad, is there a less costly way to end the transaction? Getting the developers to only start a transaction when there is one to be had will probably be like pulling teeth.

Thanks,

Steve
Tom Kyte
April 22, 2013 - 7:29 pm UTC

committing a transaction that did no work is recognized and we don't actually do anything.

we recognize it is read only and just skip it.
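A quick way to see that for yourself (assuming access to V$MYSTAT / V$STATNAME): check the session's 'redo size' statistic before and after an empty transaction -- based on the above, it should not move:

select n.name, s.value
  from v$mystat s, v$statname n
 where n.statistic# = s.statistic#
   and n.name = 'redo size';

set transaction read write;
commit;

select n.name, s.value
  from v$mystat s, v$statname n
 where n.statistic# = s.statistic#
   and n.name = 'redo size';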

Tony Tapper, September 30, 2013 - 4:13 pm UTC

This may be the answer to a problem that has been confusing us for weeks - 'snapshot too old' etc...
We're testing it now.

Thanks very much for providing this fantastic service.
I bought your Expert Oracle Database Architecture book today so hopefully I won't be mocked on this page in the future :>

Cheers, Tony

Commits causing slowness

NP, June 22, 2014 - 7:40 pm UTC

Hi Tom,

We have an application doing a large number of frequent, concurrent commits to the database. Sometimes we see slowness during such bouts of commits. A query which runs in a fraction of a second may take more than 30 seconds to finish at such times. This can continue for up to a couple of minutes, and the slowness vanishes after the surge in commits dies down. The ASH report indicates waits on the log file sync event during the problem period, with commits amounting to tens of times the CPU count. However, this behavior is not consistent. There are some instances of similar high transaction activity when the database response time is not affected, even though ASH shows exactly the same kind of waits with commits. The CPU usage is below the limit, although the run queue length exceeds the available cores.

The recommendation from ASH report is to improve I/O performance. We use SAN disks and possible I/O tuning has already been implemented in that domain.
We are on 11g RAC database.

I have following questions/comments.

1. It would be great to know possible reasons why the slowness is not encountered every time the transaction volume and subsequent commits increase by similar amounts.

2. Would it help to start another instance of RAC, considering the I/O would still be happening with the same SAN?

3. Any hints to troubleshoot and nail down the issue would be much appreciated.

Commit monitoring

Steve, May 05, 2017 - 11:09 am UTC

Is there a simple way to get the number of commits in the instance in, say, the last 5 minutes via SQL? I would like to set up a warning in our monitoring suite if it climbs too high.

I can see this working with:

Statspack (would have to be deployed in our shop)
AWR tables (possible license issues)
Log Miner (also would have to be deployed)
Auditing tables (most likely spot I think but would still have to be set up)

So I'm wondering if there are any other sources of this Information.

Thanks for any help you can offer.
Chris Saxon
May 08, 2017 - 9:37 am UTC

You could take snapshots of the "user commits" stat and store these in a table:

create table t (
  x int 
);

select sid, value from v$sesstat s
join   v$statname n
on     n.statistic# = s.statistic#
where  name like 'user commit%'
and    sid = sys_context('USERENV', 'SID');

SID  VALUE  
106  30 

begin
  for i in 1 .. 10 loop
    insert into t values (1);
    commit;
  end loop;
end;
/

select sid, value from v$sesstat s
join   v$statname n
on     n.statistic# = s.statistic#
where  name like 'user commit%'
and    sid = sys_context('USERENV', 'SID');

SID  VALUE  
106  40   


Note the values are cumulative. And if a session disconnects its information is lost. But you could build something to tell you if the number of commits by a session between captures exceeds some threshold.
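Building on that, a minimal sketch of an instance-wide capture (COMMIT_SNAPSHOTS is a placeholder table; the system-wide figure lives in V$SYSSTAT, and the insert could be scheduled every few minutes with DBMS_SCHEDULER):

create table commit_snapshots (
  snap_time timestamp default systimestamp,
  commits   number
);

insert into commit_snapshots ( commits )
select value
  from v$sysstat
 where name = 'user commits';
commit;

-- commits between the two most recent snapshots
select max(commits) - min(commits) as commits_in_interval
  from ( select commits
           from commit_snapshots
          order by snap_time desc )
 where rownum <= 2;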
