Hi Tom,
This is my first question to you and hope to receive a positive response :)
Straight to the Question:
I have a table named trail_log. This is a huge table, growing by approximately 12GB a day, and it currently has 76,216,160 rows. It is important to note that this is an online table receiving insert statements 24*7 from the front end application. ONLY INSERTS come from the front end application.
Now we have an archival process, run as a manual job, where we need to delete data older than 15 days from this table.
Table structure:
CREATE TABLE TRAIL_LOG
(
ORDER_NO NUMBER(9),
BS_NAME VARCHAR2(50 BYTE),
MESSAGE_SENDER VARCHAR2(25 BYTE),
MESSAGE_RECEIVER VARCHAR2(25 BYTE),
TRANSACTION_ID VARCHAR2(40 BYTE),
MSG_PART NUMBER(2),
MESSAGE VARCHAR2(4000 CHAR),
CRE_DATE_TIME DATE
)
Sample data:
INSERT INTO trail_log
(order_no, bs_name, message_sender,
message_receiver, transaction_id, msg_part,
MESSAGE,
cre_date_time
)
VALUES (123456789, 'business_service_name', 'name_of_application',
'receive_name', '2E2BE998F84833C7FB70189C60AEF58D', 1,
'<?xml version="1.0" encoding="UTF-8"?><soapenv:Envelope xmlns:soapenv="
http://schemas.xmlsoap.org/soap/envelope/" ><soapenv:Body>...</>',
TO_DATE ('08/14/2016 01:48:54', 'MM/DD/YYYY HH24:MI:SS')
);
DELETE /*+ PARALLEL(t,4)*/ FROM trail_log t WHERE cre_date_time<TRUNC(sysdate)-15;
There is an index on this table as well:
CREATE INDEX INDX_AUDIT_LOG_DATE ON TRAIL_LOG (CRE_DATE_TIME);
As this is an online table I can't take the CTAS approach (create table ... as select * from ...), and renaming is not an option for the same reason.
>> When I run the above DELETE it takes approximately 15 minutes to delete 2,128,170 rows, and sometimes, due to the continuous sessions from the front end application (INSERTS), some locking happens (not sure exactly what), the database gets choked, and it is total chaos :(
So can you suggest any approach which can reduce the time taken by the DELETE? I know that partitioning is one of the ways, and I am certainly considering it as a long term solution. But is there anything for the short term?
//regards
Vipin
Partitioning is *definitely* the way to go here. So consider that as soon as possible.
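To give you a flavour of where that takes you (a sketch only - the TRAIL_LOG_PART name and the boundary dates are just for illustration, and it assumes 11g or later for interval partitioning), a daily interval-partitioned version of the table lets you purge a day of data with a quick DROP PARTITION instead of a long-running DELETE:

-- sketch: same columns as TRAIL_LOG, partitioned by day on CRE_DATE_TIME
CREATE TABLE TRAIL_LOG_PART
(
  ORDER_NO          NUMBER(9),
  BS_NAME           VARCHAR2(50 BYTE),
  MESSAGE_SENDER    VARCHAR2(25 BYTE),
  MESSAGE_RECEIVER  VARCHAR2(25 BYTE),
  TRANSACTION_ID    VARCHAR2(40 BYTE),
  MSG_PART          NUMBER(2),
  MESSAGE           VARCHAR2(4000 CHAR),
  CRE_DATE_TIME     DATE
)
PARTITION BY RANGE (CRE_DATE_TIME)
INTERVAL (NUMTODSINTERVAL(1,'DAY'))
(
  PARTITION p_initial VALUES LESS THAN (DATE '2016-08-01')
);

-- purging a day then becomes a quick dictionary operation
-- (assuming a partition exists for that day, ie, rows were loaded for it)
ALTER TABLE TRAIL_LOG_PART DROP PARTITION FOR (DATE '2016-08-05') UPDATE INDEXES;

Getting your existing data across without an outage is its own exercise (DBMS_REDEFINITION can do that conversion online), but the sketch shows where you want to end up.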
In the meantime, rather than doing a big resource-hungry delete on the table, perhaps do it more as a "trickle" so it's less intrusive on your applications. Something like:
declare
  d date;
begin
  loop
    d := sysdate;
    loop
      -- delete a small batch of the oldest rows
      delete /*+ index_asc(t) */ from trail_log t
      where  cre_date_time < trunc(sysdate) - 15
      and    rownum < 10000;
      -- stop batching when nothing is left to delete, or after about a minute
      exit when sql%notfound or sysdate - d > 60 / 86400;
    end loop;
    commit;
    -- pause before the next round of deletes
    dbms_lock.sleep(30);
  end loop;
end;
/
ie, delete batches of 10,000 rows for up to 1 minute, then commit and sleep for 30 seconds. That can be left to run continuously. You can, of course, adjust the numbers and times to suit what works best for you.
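If you don't want to keep a foreground session tied up running that block, one way (a sketch only - the PURGE_TRAIL_LOG procedure and TRAIL_LOG_PURGE_JOB job name are made-up, and it assumes you've wrapped the block above in a stored procedure) is to run it in the background with DBMS_SCHEDULER:

begin
  dbms_scheduler.create_job(
    job_name   => 'TRAIL_LOG_PURGE_JOB',
    job_type   => 'STORED_PROCEDURE',
    job_action => 'PURGE_TRAIL_LOG',   -- hypothetical wrapper for the trickle-delete block
    start_date => systimestamp,
    enabled    => true);
end;
/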
There is also the row archival option to consider:
https://docs.oracle.com/database/121/VLDBG/GUID-5A76B6CE-C96D-49EE-9A89-0A2CB993A933.htm#VLDBG14154
but that would need greater investigation to see if it meets your needs.
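To give you an idea of what that involves (a sketch only, assuming you are on 12c where In-Database Archiving is available), you mark old rows as archived instead of deleting them:

-- enable row archival (adds the hidden ORA_ARCHIVE_STATE column)
alter table trail_log row archival;

-- flag rows older than 15 days as archived; normal queries no longer see them
update trail_log
set    ora_archive_state = dbms_ilm.archivestatename(1)
where  cre_date_time < trunc(sysdate) - 15;
commit;

-- sessions that still need to see the archived rows can use:
-- alter session set row archival visibility = all;

Keep in mind the archived rows still physically sit in the table until you move or remove them, so on its own this won't address the 12GB-a-day growth - hence the need for further investigation.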