Question and Answer

Tom Kyte

Thanks for the question, Paul.

Asked: July 01, 2001 - 10:35 pm UTC

Last updated: July 04, 2001 - 8:52 pm UTC

Version: 8.1.5

You Asked

Hi Tom,

1. Once in a while, I need to delete a large number of rows from a table. I am 100 percent sure those rows should be removed (there is no need to roll back the operation). Since Oracle automatically logs the transaction in its rollback segment, the delete takes much longer, and sometimes an "unable to extend rollback segment" error is raised. How do I tell Oracle to suspend the logging (no logging) on the rollback segment?

2. I use the Oracle JDBC client to develop a connection pool, yet I can only set up about 120 concurrent connections to the Oracle DBMS before a limit is reached. Can you tell me how to increase that number, say up to 500?

Thank you!

Paul Huang


and Tom said...

1) You may not need to roll it back, but we sure the heck do (a power failure in the middle of your delete would mean a corrupt table when you restarted, a constraint violation on the last row would mean a corrupt table, a software failure would mean a corrupt table, and so on).

If you want to get rid of ALL rows, truncate the table.

If you want to get rid of MOST of the rows --

create table temp unrecoverable as select THE ROWS YOU WANT TO KEEP;
truncate table YOUR_TABLE;
insert /*+ APPEND */ into YOUR_TABLE select * from temp;
drop table temp;


You cannot disable logging on normal DML. Only special operations, like a direct-path load, an initial create, or a rebuild, can be done in a non-logged fashion. With these operations, if we fail in the middle -- we don't care -- we just drop the affected object.
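
For illustration, here is a minimal sketch of such operations (BIG_TABLE and BIG_TABLE_IDX are made-up names, and ALL_OBJECTS is used only as a convenient row source):

create table big_table nologging        -- initial create: not logged
as select * from all_objects;

create index big_table_idx
    on big_table(object_id) nologging;  -- initial index build: not logged

alter index big_table_idx
    rebuild nologging;                  -- rebuild: not logged

insert /*+ APPEND */ into big_table     -- direct-path load: no undo for the new data,
    select * from all_objects;          -- and no redo since the table is NOLOGGING

On 8.1.5 the older keyword UNRECOVERABLE (used in the CTAS above) is accepted as a synonym for NOLOGGING.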


2) One would need to know the exact error message. I'll guess it is "maximum number of processes exceeded" (ORA-00020). Raise your PROCESSES init.ora parameter.
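
A minimal sketch of the change, assuming a plain-text init.ora (there is no server parameter file in 8.1.5, so the instance must be bounced for this to take effect; 500 is the target from the question):

# init.ora
processes = 500
# if SESSIONS is not set, it is derived as 1.1 * processes + 5;
# set it explicitly if you need a different ceiling
sessions  = 555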


Followup to comment one

Well, I'm disappointed in the comment.

A database is designed to provide highly concurrent, safe, consistent access to lots of data. Data integrity is paramount. As I said -- if a delete failed 1/2 way through for ANY reason -- the table would be totally corrupt -- garbage, unusable.

I would look at my design if I had tables that I frequently needed to delete MOST rows from in one fell swoop. There might be ways -- such as physical table partitioning -- to make this very very fast. Given the information I had -- truncate and CTAS are the way to go.

We really don't do extra work for nothing. Operations that can safely be performed without logging are permitted to be performed without logging. Other operations -- operations that would damage the database in the event of a failure -- are not. Data integrity and protection is our first job.

Followup to comment two

Excellent -- we can do that very fast. You have three cases:

"some may want 90 days, some want 120 days, some even want 360 days"

We can partition data by DATE and RETENTION. You would be adding partitions to one end of the table for new data every 90, 120, or 360 days, and dropping old partitions with old data at the end of 90, 120, or 360 days. The removal of data would be instantaneous -- you could even archive it easily.
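
A minimal sketch of the idea, using made-up names (BILLING_RECORDS, with quarterly boundaries shown only for brevity -- the real boundaries would follow your 90/120/360-day retention windows):

create table billing_records
( bill_date    date,
  account_id   number,
  amount       number
)
partition by range (bill_date)
( partition p_2001_q2 values less than (to_date('01-jul-2001','dd-mon-yyyy')),
  partition p_2001_q3 values less than (to_date('01-oct-2001','dd-mon-yyyy'))
);

-- new data: add a partition at the "top" end of the table
alter table billing_records add partition p_2001_q4
    values less than (to_date('01-jan-2002','dd-mon-yyyy'));

-- expired data: drop the old partition -- no DELETE, no rollback, instantaneous
alter table billing_records drop partition p_2001_q2;

The drop is essentially a data-dictionary operation, so it takes about the same time whether the partition holds one row or ten million.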

Interested in more details??

Followup to comment three

You are wrong. An INSERT is very, very different from an UPDATE or DELETE.

An insert is trivial to do in a non-logged fashion. We can easily add a new extent at the "end" of the table and put the new data in there. If the operation fails for any reason, that extent just never becomes part of the real table -- no harm done. The bottom line is that with an insert, we can ensure the new data doesn't get INTERMINGLED with the old data. We can load it and attach it to the table without harming anything.
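
To make that concrete, here is a hedged sketch (T and SOURCE_TABLE are hypothetical). The direct-path insert writes into freshly allocated blocks above the table's high-water mark, so until the commit the existing rows are never touched:

alter table t nologging;          -- let the load skip redo generation
insert /*+ APPEND */ into t       -- direct path: new data goes into
    select * from source_table;   -- brand-new extents only
commit;                           -- only now do the new extents become part of T
-- if the instance died before the commit, T would be exactly as it was before

One caveat: after a NOLOGGING load, those blocks are not in the redo stream, so take a fresh backup if you ever want to recover them.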

An update or delete, on the other hand, is terribly intermixed with the EXISTING data. If a delete were 1/2 way through and the system crashed -- the table would be in a totally indeterminate state (i.e., corrupt). It is because an update and a delete work on data that is intermingled with other data that will not be changing that we cannot do these operations unlogged -- we NEED to be able to recover the operation, else the table is corrupt after any failure.





Comments

Paul Huang, July 03, 2001 - 1:46 pm UTC

I am kind of disappointed by the answer to question 1. Oracle should allow users to reorganize data efficiently when we need to get rid of junk.

Tom made a correct guess on question 2. It works like a charm now. The limit was on the server side, not, as I had guessed, on the client.

Paul Huang, July 04, 2001 - 12:32 am UTC

Well, Tom, not all tables are the same.
We have a billing system that downloads activity from our applications across the nation. We use one table to hold the original records, from which we generate summary reports. Based on the agreements with our clients, we keep the original records in the billing system for a certain number of days -- some may want 90 days, some want 120 days, some even want 360 days. So each week we need to purge millions of rows. Yet the table has no relationships to any other table in the system. If the delete fails during the operation, we just re-issue the statement. No data integrity is violated; there is no transaction to worry about, because it is simply a standalone table.

Insert

KP, July 04, 2001 - 12:49 am UTC

In the case of insert you said:

"like a direct path load, an initial create or rebuild and so on can be done in a non-logged fashion. With these operations, if we fail in the middle - we don't care -- just drop the affected object."

So in the case of insert you can do it in a non-logged fashion. Why can't the same thing be applied to delete? Even if the delete fails 1/2 way through, or after 99.99%, undo it. How it is to be done is a question to be answered at the architectural level.

What I mean to say is that what can be done for insert can be done for delete and update.

Whether the database architecture supports it or not is a different issue.

I think Tom might have meant direct-load insert and NOLOGGING

Ravi Ammamuthu, July 04, 2001 - 8:52 pm UTC

There is no way of doing nologging operations with a conventional insert; it is the same as any DELETE or UPDATE operation. Only a DIRECT-LOAD insert can be done with NOLOGGING. I think that is what Tom meant.



Thanks for insights on NOLOGGING

Pavan Kumar, May 24, 2005 - 10:43 pm UTC

Thanks, AskTom team, for providing the insights on NOLOGGING mode. I'll have to try all the situations before settling on a concrete methodology for my problem.
