Satender Dalal, January 27, 2017 - 11:11 am UTC
Thanks a lot Chris.
But I want to ask: if there are 20,000 inserts on a table, should we commit after every 300 inserts so that locking does not happen?
January 27, 2017 - 5:06 pm UTC
Are you actually seeing locking, or are you just guessing that you will? As the comment below says, it's rare you'll see blocking on insert.
multiple commits
Praveen Ray, January 27, 2017 - 1:18 pm UTC
so that locking does not happen? - are you /*+ append */ inserting? If not, the insert takes only a sub-exclusive (SX) table lock rather than an exclusive (X) one, so other transactions are not blocked from working on the rest of the table's rows.
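A minimal sketch of that lock-mode difference (tables t and staging are hypothetical; the v$lock query is one way to observe it from a second session):

-- Conventional insert: takes only a row-exclusive (SX, lmode = 3) TM lock,
-- so other sessions can still run DML against other rows of t.
INSERT INTO t SELECT * FROM staging;

-- Direct-path insert: takes an exclusive (X, lmode = 6) TM lock,
-- blocking all other DML on t until this transaction ends.
INSERT /*+ APPEND */ INTO t SELECT * FROM staging;

-- From a second session, check which TM lock mode is held on t:
SELECT l.sid, l.lmode
FROM   v$lock l
JOIN   dba_objects o ON o.object_id = l.id1
WHERE  l.type = 'TM'
AND    o.object_name = 'T';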
It is meaningless to scatter several commits across what is logically one transaction.
January 27, 2017 - 5:06 pm UTC
Indeed
Experiment
Chuck, January 27, 2017 - 2:45 pm UTC
Honestly 20k records isn't that much nowadays.
We have a process we run once a year that inserts 400k records into our most active table, which has many indexes and triggers on it. It runs as a single transaction during business hours. It is written as a simple INSERT INTO TABLE SELECT... without any hints. It doesn't cause any problems at all.
Test your process first without all the complications. Add complications only if you need them to solve a real problem.
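For example, a first pass can be as simple as this (SQL*Plus sketch; the table names are made up):

SET TIMING ON

INSERT INTO busy_table
SELECT * FROM staging_table;

COMMIT;

If the elapsed time is acceptable, you're done - no batching needed.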
January 27, 2017 - 5:07 pm UTC
Well said.
A reader, January 28, 2017 - 1:39 am UTC
Yes, I agree 20k records are not much. But what if every session has 20k records and there are 10 sessions? In this case locking will happen, right?
And all the sessions are inserting into the same table.
What I'm doing at the moment is fetching around 300 records at a time from a cursor (with a LIMIT clause), inserting them into the new table, and deleting them from the old one. Is this the correct way?
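In code, that approach looks roughly like this sketch (the tables old_tab/new_tab and the id column are assumptions):

DECLARE
  CURSOR c IS SELECT * FROM old_tab;
  TYPE t_rows IS TABLE OF old_tab%ROWTYPE;
  TYPE t_ids  IS TABLE OF old_tab.id%TYPE;
  l_rows t_rows;
  l_ids  t_ids;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rows LIMIT 300;
    EXIT WHEN l_rows.COUNT = 0;

    -- copy the keys into a scalar collection so the DELETE can bind them
    -- (older versions can't reference record fields inside FORALL)
    l_ids := t_ids();
    l_ids.EXTEND(l_rows.COUNT);
    FOR i IN 1 .. l_rows.COUNT LOOP
      l_ids(i) := l_rows(i).id;
    END LOOP;

    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO new_tab VALUES l_rows(i);

    FORALL i IN 1 .. l_ids.COUNT
      DELETE FROM old_tab WHERE id = l_ids(i);

    COMMIT;  -- note: committing across the still-open cursor risks ORA-01555
  END LOOP;
  CLOSE c;
END;
/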
January 30, 2017 - 3:53 pm UTC
in this case locking will happen right?
No! Unless you're inserting duplicate values for unique/primary keys that is. You might have contention. But that's a different issue. And committing more often won't really help here.
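To picture that one blocking case, here is a sketch with a hypothetical table t whose primary key is id:

-- session 1:
INSERT INTO t (id) VALUES (1);  -- no commit yet

-- session 2:
INSERT INTO t (id) VALUES (1);  -- waits on session 1's transaction;
                                -- raises ORA-00001 if session 1 commits,
                                -- succeeds if session 1 rolls back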
If the rows are all coming from another table, use the "insert into ... select" method the review below mentions.
Then test how this works. Measure it with 10 concurrent sessions. Ask yourself if it's "fast enough". Only start trying options if the answer to this is "no".
Re
GJ, January 29, 2017 - 3:35 am UTC
As Chris has said, commit only when you need to commit.
The fastest/cleanest/restartable way would be...
INSERT INTO DESTINATION_TABLE
SELECT * FROM TABLE_WITH_20K_ROWS;
And if 10 such sessions run this, it would lead to 10 transactions.
If, on the other hand, you are committing per row, then you would have 200K transactions.
January 30, 2017 - 3:53 pm UTC
Thanks and well said.