You might have hit a bug there.
I've replicated your issue. I was just loading the same few blobs over and over, with a data file like this (there's a small script further down that generates it):
1|/tmp/blob1.jpg|U|
2|/tmp/blob2.jpg|U|
3|/tmp/blob3.jpg|U|
4|/tmp/blob4.jpg|U|
5|/tmp/blob0.jpg|U|
6|/tmp/blob1.jpg|U|
7|/tmp/blob2.jpg|U|
8|/tmp/blob3.jpg|U|
9|/tmp/blob4.jpg|U|
10|/tmp/blob0.jpg||
...
with a table defined as:
create table vvs_msg
( id int,
content blob,
code_page varchar2(5)
);
and a control file like this:
LOAD DATA
INFILE '/tmp/load.dat'
BADFILE '/tmp/load.bad'
DISCARDFILE '/tmp/load.dsc'
TRUNCATE
INTO TABLE vvs_msg
FIELDS TERMINATED BY '|'
TRAILING NULLCOLS
(
id,
content_filename FILLER CHAR(100),
content LOBFILE(content_filename) TERMINATED BY EOF,
code_page
)
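For reference, the whole reproduction can be driven from the shell with something like the sketch below - the row count and the scott/tiger connect string are just placeholders, and it assumes blob0.jpg through blob4.jpg already exist in /tmp:

# generate a data file that cycles through the same five blob files
for i in $(seq 1 100000); do
  echo "$i|/tmp/blob$((i % 5)).jpg|U|"
done > /tmp/load.dat

# kick off a conventional-path load using the control file above
sqlldr userid=scott/tiger control=/tmp/load.ctl log=/tmp/load.log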
When I kick that off and look at the sqlldr process while it runs, its memory size continues to grow and grow:
[oracle@vbgeneric ~]$ cat /proc/5310/status | grep ^Vm
VmPeak: 191088 kB
VmSize: 191088 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 80736 kB
VmRSS: 80736 kB
VmData: 77840 kB
VmStk: 472 kB
VmExe: 1448 kB
VmLib: 70756 kB
VmPTE: 332 kB
VmPMD: 12 kB
VmSwap: 0 kB
[oracle@vbgeneric ~]$ cat /proc/5310/status | grep ^Vm
VmPeak: 191352 kB
VmSize: 191352 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 81000 kB
VmRSS: 81000 kB
VmData: 78104 kB
VmStk: 472 kB
VmExe: 1448 kB
VmLib: 70756 kB
VmPTE: 332 kB
VmPMD: 12 kB
VmSwap: 0 kB
[oracle@vbgeneric ~]$ cat /proc/5310/status | grep ^Vm
VmPeak: 191484 kB
VmSize: 191484 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 81000 kB
VmRSS: 81000 kB
VmData: 78236 kB
VmStk: 472 kB
VmExe: 1448 kB
VmLib: 70756 kB
VmPTE: 332 kB
VmPMD: 12 kB
VmSwap: 0 kB
[oracle@vbgeneric ~]$ cat /proc/5310/status | grep ^Vm
VmPeak: 201384 kB
VmSize: 201384 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 91032 kB
VmRSS: 91032 kB
VmData: 88136 kB
VmStk: 472 kB
VmExe: 1448 kB
VmLib: 70756 kB
VmPTE: 352 kB
VmPMD: 12 kB
VmSwap: 0 kB
[oracle@vbgeneric ~]$ cat /proc/5310/status | grep ^Vm
VmPeak: 202572 kB
VmSize: 202572 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 92088 kB
VmRSS: 92088 kB
VmData: 89324 kB
VmStk: 472 kB
VmExe: 1448 kB
VmLib: 70756 kB
VmPTE: 352 kB
VmPMD: 12 kB
VmSwap: 0 kB
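Those snapshots were just repeated by hand; if you want to watch it continuously, you can sample the resident set size in a loop (5310 was the sqlldr PID on my box, so substitute whatever ps reports for yours):

# sample the sqlldr process's resident set size every 10 seconds
# until the process exits
while kill -0 5310 2>/dev/null; do
  grep VmRSS /proc/5310/status
  sleep 10
done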
I tried some variations (direct path, different readsize/bindsize values, etc.) and none seemed to help significantly.
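The variations were along these lines - again, scott/tiger is just a placeholder connect string, and the exact buffer sizes aren't the point:

# direct path instead of conventional
sqlldr userid=scott/tiger control=/tmp/load.ctl direct=true
# bigger read/bind buffers and a different commit interval for the
# conventional path (the sizes here are arbitrary)
sqlldr userid=scott/tiger control=/tmp/load.ctl readsize=20971520 bindsize=20971520 rows=1000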
So get in touch with Support - you can link to this question as supporting evidence. As a workaround, look at processing the files in smaller batches, since the memory growth seems proportional to the number of BLOBs loaded.
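A rough sketch of that batching idea (paths and connect string are placeholders, and it assumes you switch the control file from TRUNCATE to APPEND and truncate the table once up front):

# split the data file into chunks and load each one in its own sqlldr run,
# so the process is torn down and its memory released between batches
split -l 10000 /tmp/load.dat /tmp/load_chunk_
for f in /tmp/load_chunk_*; do
  sqlldr userid=scott/tiger control=/tmp/load.ctl data="$f" log="$f".log
done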