Question and Answer

Chris Saxon

Thanks for the question, Satender.

Asked: January 27, 2017 - 7:45 am UTC

Last updated: January 30, 2017 - 3:53 pm UTC

Version: 11

Viewed 1000+ times

You Asked

Hi team,

I have a simple question.

Let's say I have 20,000 transactions in one session, and I have multiple sessions, say 10.

Is it good to set a commit frequency of 300-400 in the code for better performance?

Please let me know; what I am currently following is a commit frequency of 300.

and Chris said...

You should commit as soon as a transaction is complete and not before!

Committing before the transaction is complete breaks your business logic and causes the database to do more work.

Merging several transactions into one means you (may) lose a lot of work if anything goes wrong.
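For example, here is a minimal PL/SQL sketch (the orders, order_items and staged_items tables are made up for illustration) of committing once per complete business transaction rather than every N rows:

-- Hypothetical tables; the point is the single commit at the end.
begin
  insert into orders (order_id, customer_id) values (1, 100);

  insert into order_items (order_id, item_id, qty)
  select 1, item_id, qty
  from   staged_items
  where  order_id = 1;

  commit;   -- commit once, when the whole business transaction is complete
exception
  when others then
    rollback;   -- nothing half-done is left behind
    raise;
end;
/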

For more details on this, read:

https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1519405621318
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4951966319022


Comments

Satender Dalal, January 27, 2017 - 11:11 am UTC

Thanks a lot Chris.

But I want to ask: if there are 20,000 inserts on a table, should we commit after every 300 inserts so that locking does not happen?
Chris Saxon
January 27, 2017 - 5:06 pm UTC

Are you actually seeing locking, or are you just guessing that you will? As the comment below says, it's rare you'll see blocking on insert.

multiple commits

Praveen Ray, January 27, 2017 - 1:18 pm UTC

"so that locking does not happen?" - are you using a direct-path /*+ append */ insert? If not, the insert won't take an exclusive (X) lock on the table, only a row-exclusive (SX) lock, which won't make other transactions wait to work on the available records.
It is meaningless to put several commits inside one transaction.
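To make the lock modes concrete, a small sketch (the table names t and t_stage are placeholders):

-- Conventional insert: T is locked in row-exclusive (SX) mode,
-- so other sessions can still insert into T at the same time.
insert into t select * from t_stage;

-- Direct-path insert: T is locked exclusively until commit,
-- so other sessions' DML against T will wait.
insert /*+ append */ into t select * from t_stage;
commit;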
Chris Saxon
January 27, 2017 - 5:06 pm UTC

Indeed

Experiment

Chuck, January 27, 2017 - 2:45 pm UTC

Honestly 20k records isn't that much nowadays.
We have a process we run once a year that inserts 400k records into our most active table, which has many indexes and triggers on it. It runs as a single transaction during business hours. It is written as a simple INSERT INTO TABLE SELECT... without any hints. It doesn't cause any problems at all.

Test your process first without all the complications. Add complications only if you have to in order to solve a real problem.

Chris Saxon
January 27, 2017 - 5:07 pm UTC

Well said.

A reader, January 28, 2017 - 1:39 am UTC

Yes, I agree 20k records are not much. But what if every session has 20k records and there are 10 sessions - in this case locking will happen, right?
And all the sessions are inserting into the same table.
What I'm following is: I'm fetching around 300 records at a time in one cursor (using a limit), inserting them into a table and deleting them from the old table. Is this the correct way?
Chris Saxon
January 30, 2017 - 3:53 pm UTC

in this case locking will happen right?

No! Unless you're inserting duplicate values for unique/primary keys that is. You might have contention. But that's a different issue. And committing more often won't really help here.
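For example (a sketch using a made-up table t whose id column is the primary key):

-- Session 1 inserts and does not commit yet:
insert into t (id) values (1);

-- Session 2 inserting the SAME key value blocks,
-- waiting to see whether session 1 commits or rolls back:
insert into t (id) values (1);

-- Session 2 inserting a DIFFERENT key value does not block:
insert into t (id) values (2);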

If the rows are all coming from another table, use the "insert into ... select" method the review below mentions.

Then test how this works. Measure it with 10 concurrent sessions. Ask yourself if it's "fast enough". Only start trying options if the answer to this is "no".
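Something like the following sketch (old_table and new_table are placeholder names) does the whole move as one transaction instead of 300 rows at a time:

-- Copy everything, delete the source rows, then one commit.
insert into new_table
select * from old_table;

delete from old_table;

commit;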

Re

GJ, January 29, 2017 - 3:35 am UTC

As Chris has said, commit only when you need to commit.

The fastest/cleanest/restartable way would be...
INSERT INTO DESTINATION_TABLE
SELECT * FROM TABLE_WITH_20K_rows;

And if 10 such sessions run this, it would lead to 10 transactions.

If, on the other hand, you are committing per row, then you would have 200K transactions.

Chris Saxon
January 30, 2017 - 3:53 pm UTC

Thanks and well said.