Performance issue in CLOB/BLOB data migration

Question and Answer

Connor McDonald

Thanks for the question, Rohan.

Asked: September 20, 2016 - 5:35 pm UTC

Last updated: September 27, 2016 - 12:08 am UTC

Version: 12c

Viewed 1000+ times

You Asked

(did not get any answer for https://asktom.oracle.com/pls/apex/f?p=100:24:0::NO::P24_ID:9531842300346462307 )
Hello Tom,

First of all, I would like to thank you for your immense support on database issues. It helps us a lot!

Question:
My source table has 3 million records and includes a CLOB column.
I am joining it with another table and inserting into a target table.

The target table has a BLOB column, so I am using a function to convert the CLOB to a BLOB.

The fetch query is a simple join on the two tables. I have run the SQL Tuning Advisor, and it recommends a full table scan.

The migration took more than 2 hours with the configuration below:
CPUs: 8
Memory: 32 GB
Archive log mode: off
DB_FILE_MULTIBLOCK_READ_COUNT = 8

The same process on the client's QA environment took 24 hours with 5 million records.

I am using the function below in the query for the CLOB-to-BLOB conversion:
-------------------------------------------------------------------------------
CREATE OR REPLACE FUNCTION "CLOB_TO_BLOB" (P_CLOB CLOB) RETURN BLOB
AS
  L_BLOB          BLOB;
  L_DEST_OFFSET   INTEGER := 1;
  L_SOURCE_OFFSET INTEGER := 1;
  L_LANG_CONTEXT  INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
  L_WARNING       INTEGER := DBMS_LOB.WARN_INCONVERTIBLE_CHAR;
BEGIN
  IF P_CLOB IS NOT NULL THEN
    DBMS_LOB.CREATETEMPORARY(L_BLOB, TRUE);
    DBMS_LOB.CONVERTTOBLOB(
      DEST_LOB     => L_BLOB,
      SRC_CLOB     => P_CLOB,
      AMOUNT       => DBMS_LOB.LOBMAXSIZE,
      DEST_OFFSET  => L_DEST_OFFSET,
      SRC_OFFSET   => L_SOURCE_OFFSET,
      BLOB_CSID    => DBMS_LOB.DEFAULT_CSID,
      LANG_CONTEXT => L_LANG_CONTEXT,
      WARNING      => L_WARNING
    );
  ELSE
    L_BLOB := NULL;
  END IF;
  RETURN L_BLOB;
END;
-------------------------------------------------------------------------------
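One side effect worth noting: the function returns a temporary BLOB created with DBMS_LOB.CREATETEMPORARY but never frees it, so each row's converted LOB can linger in the temporary tablespace until the session's LOB duration ends. A minimal sketch of a caller that releases the temporary BLOB after use (the row limit and usage are illustrative only):

```
DECLARE
  l_blob BLOB;
BEGIN
  FOR r IN (SELECT content FROM user_notif_history WHERE ROWNUM <= 10) LOOP
    l_blob := clob_to_blob(r.content);
    -- ... use l_blob here, e.g. insert it into the target table ...
    IF l_blob IS NOT NULL THEN
      DBMS_LOB.FREETEMPORARY(l_blob);  -- release the temp segment promptly
    END IF;
  END LOOP;
END;
/
```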


Code snippet used for the insert:

OPEN CUR_UNH;
LOOP
  FETCH CUR_UNH BULK COLLECT INTO CUR_UNH_TAB LIMIT 10000;
  EXIT WHEN CUR_UNH_TAB.COUNT = 0;
  FORALL I IN 1 .. CUR_UNH_TAB.COUNT
    INSERT /*+ APPEND_VALUES NOLOGGING */
    INTO UNH_TARGET VALUES CUR_UNH_TAB(I);
  COMMIT;
END LOOP;
CLOSE CUR_UNH;


---------------------------------------------------
********************************************************************************

SQL ID: 9ncjrmn2st6vm Plan Hash: 2576917505

SELECT USER_NOTIF_HISTORY_ID ID , AUM.CONSUMER_ID , AUM.NEW_ACCNT_ID ACCNT_ID
, CAST (TIME_SENT AS TIMESTAMP(0)) SENT_TIME , STATUS , PHONE ,
NOTIFICATION_TYPE , SUBJECT_LINE , CONTENT_VERSION , CLOB_TO_BLOB(CONTENT) ,
RETURN_STATUS , NOTIFICATION_ADDR , EVENT_TYPE , :B2 ORG_ID , CAST
(CREATED_TIME AS TIMESTAMP(0)) INSERT_TIME , :B1 INSERT_BY , :B1
LAST_UPD_BY , CAST (SYSDATE AS TIMESTAMP(0)) LAST_UPD_TIME, 1
REC_VERSION_NUM
FROM
USER_NOTIF_HISTORY UNH,ACCNT_USER_MAP AUM WHERE UNH.E3_USER_ID=
AUM.E3_USER_ID


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 289 3542.19 3818.63 58000856 80338689 99706146 2889072
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 289 3542.19 3818.63 58000856 80338689 99706146 2889072

Misses in library cache during parse: 0
Parsing user id: 80 (recursive depth: 1)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
direct path read 57921934 0.22 461.58
db file sequential read 234 0.00 0.03
db file scattered read 3 0.00 0.00
********************************************************************************






SQL ID: 7csfprx9xbgg5 Plan Hash: 0

INSERT /*+ APPEND_VALUES NOLOGGING */ INTO E3EIP.E3_USER_NFN_HISTORY
VALUES
(:B1 ,:B2 ,:B3 ,:B4 ,:B5 ,:B6 ,:B7 ,:B8 ,:B9 ,:B10 ,:B11 ,:B12 ,:B13 ,:B14 ,
:B15 ,:B16 ,:B17 ,:B18 ,:B19 ) LOG ERRORS INTO USER_NOTIF_HISTORY_EXCP
('INSERT') REJECT LIMIT UNLIMITED


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 290 1757.69 4605.65 26503 8901681 28826158 2899034
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 290 1757.69 4605.65 26503 8901681 28826158 2899034

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 80 (recursive depth: 1)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
direct path write 2893028 0.33 2916.79
db file sequential read 24370 0.01 6.86
enq: TX - contention 4199 0.16 37.90
latch: cache buffers chains 585 0.00 0.01
Disk file operations I/O 5 0.00 0.00
local write wait 2515 0.16 9.30
buffer busy waits 1925 0.00 0.04
latch free 7 0.00 0.00
read by other session 64 0.04 0.06
enq: HW - contention 56 0.07 1.07
log file switch (private strand flush incomplete)
4 0.02 0.08
latch: shared pool 1 0.00 0.00
latch: enqueue hash chains 1 0.00 0.00
********************************************************************************


If I replace CLOB_TO_BLOB(CONTENT) with NULL in the SELECT query, it completes in 3 minutes; with the conversion it takes 2 hours for 3 million records.

On the client's QA environment the same process takes 27 hours for 5 million records.

Please assist in tuning the process.

and Connor said...

There have been some performance issues with DML error logging when combined with direct-path loading.

Try repeating the experiment with a subset of the data, using just:

INSERT /*+ APPEND */ INTO E3EIP.E3_USER_NFN_HISTORY
SELECT USER_NOTIF_HISTORY_ID ID , AUM.CONSUMER_ID , AUM.NEW_ACCNT_ID ACCNT_ID
, CAST (TIME_SENT AS TIMESTAMP(0)) SENT_TIME , STATUS , PHONE ,
NOTIFICATION_TYPE , SUBJECT_LINE , CONTENT_VERSION , CLOB_TO_BLOB(CONTENT) ,
RETURN_STATUS , NOTIFICATION_ADDR , EVENT_TYPE , :B2 ORG_ID , CAST
(CREATED_TIME AS TIMESTAMP(0)) INSERT_TIME , :B1 INSERT_BY , :B1
LAST_UPD_BY , CAST (SYSDATE AS TIMESTAMP(0)) LAST_UPD_TIME, 1
REC_VERSION_NUM
FROM
USER_NOTIF_HISTORY UNH,ACCNT_USER_MAP AUM WHERE UNH.E3_USER_ID=
AUM.E3_USER_ID

and see what kind of performance results you get.

If it *is* faster, then we can explore mechanisms to do the work without having to worry about one error breaking the entire process.
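One such mechanism (not necessarily the one Connor had in mind) is FORALL ... SAVE EXCEPTIONS, which lets a bulk insert continue past individual row failures and report them afterwards via SQL%BULK_EXCEPTIONS. A minimal sketch, reusing the cursor and collection names from the question's snippet:

```
DECLARE
  bulk_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(bulk_errors, -24381);  -- ORA-24381: array DML errors
BEGIN
  OPEN cur_unh;
  LOOP
    FETCH cur_unh BULK COLLECT INTO cur_unh_tab LIMIT 10000;
    EXIT WHEN cur_unh_tab.COUNT = 0;
    BEGIN
      FORALL i IN 1 .. cur_unh_tab.COUNT SAVE EXCEPTIONS
        INSERT INTO unh_target VALUES cur_unh_tab(i);
    EXCEPTION
      WHEN bulk_errors THEN
        -- Log each failed row instead of aborting the whole batch
        FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
          DBMS_OUTPUT.PUT_LINE(
            'Row ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX ||
            ' failed: ' || SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
        END LOOP;
    END;
    COMMIT;
  END LOOP;
  CLOSE cur_unh;
END;
/
```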


Comments

CLOB to BLOB conversion - not a very useful solution

Rohan Mhatre, September 26, 2016 - 8:50 am UTC

Hello Chris,

I am aware that error logging affects the overall performance.
But in my case it seems that the CLOB-to-BLOB conversion is the main culprit.

As mentioned earlier, when I replace the CLOB-to-BLOB conversion with NULL it takes only 3 minutes, while including it takes more than 2 hours.

Although there are only 2 million rows, the table is more than 100 GB due to the CLOB data in it.

Is there a faster way to convert CLOB to BLOB?
Connor McDonald
September 27, 2016 - 12:08 am UTC

Check out dbms_parallel_execute.
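For context, dbms_parallel_execute splits a table into rowid-range chunks and runs a given DML statement against each chunk in parallel job slaves, which can spread both the conversion CPU and the LOB I/O across sessions. A rough sketch of how the migration could be chunked (task name, chunk size, and parallel level are illustrative, and the column list is deliberately elided):

```
BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'MIGRATE_UNH');

  -- Carve the source table into chunks of roughly 10000 rows each
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => 'MIGRATE_UNH',
    table_owner => USER,
    table_name  => 'USER_NOTIF_HISTORY',
    by_row      => TRUE,
    chunk_size  => 10000);

  -- Each job slave receives a rowid range via the :start_id / :end_id binds
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'MIGRATE_UNH',
    sql_stmt       => 'INSERT INTO unh_target
                         SELECT ... , clob_to_blob(content), ...
                         FROM user_notif_history
                         WHERE rowid BETWEEN :start_id AND :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 8);

  DBMS_PARALLEL_EXECUTE.DROP_TASK('MIGRATE_UNH');
END;
/
```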
