Retrieving Data from BLOB into a file.
Question and Answer

Tom Kyte

Thanks for the question, Ambarish.

Asked: November 07, 2002 - 4:25 pm UTC

Last updated: April 16, 2012 - 3:59 pm UTC

Version: 8.1.7

Viewed 100K+ times!

You Asked

Hi Tom,

We have inserted a BLOB column into the database. The file contents were inserted properly.

I am not able to retrieve the BLOB from the database into a file. The requirement is to write the BLOB out to a file.

Thanks a lot for your co-operation.

Regards,
Ambarish.

and Tom said...

Prior to Oracle9iR2 you will need to use Java, C, VB, or some other 3GL language.

In 9iR2 -- PLSQL can write binary files using UTL_FILE.

In 9iR1 and before, you will need to use Java or some other 3gl that has the ability to do this.

If you have my book "Expert One-on-One Oracle" -- I have an example in there, in Pro*C, that writes BLOBs to files in an external procedure -- so it works like a stored procedure call.

Rating

  (127 ratings)


Comments

Hope this helps

Mio, November 08, 2002 - 12:38 am UTC

Here's a little Java client utility that I use with 8.1.7. Look at the usage string: query_file should contain a query that returns a single LOB locator. By default it reads from Oracle and writes to lob_file; pass -write to read from lob_file and write to Oracle. By default it assumes a CLOB; pass -blob for binary data.
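For example (the connect string, file names and query below are made up, just to illustrate the usage string), to pull a BLOB out you might run:

    java LobUtils scott/tiger@mydb -blob -qf query.sql -lf image.dat

where query.sql contains something like:

    select my_blob_col from my_table where id = 1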

// file LobUtils.java
import java.io.InputStream;
import java.io.OutputStream;
import java.io.IOException;
import java.io.FileInputStream;
import java.io.FileOutputStream;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.Blob;
import oracle.sql.BLOB;
import java.sql.SQLException;

import java.sql.Clob;
import oracle.sql.CLOB;

import java.io.FileReader;
import java.io.BufferedReader;
import java.io.StringWriter;
import java.io.FileNotFoundException;
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.Writer;
import java.io.InputStreamReader;

public class LobUtils
{
    final static int bBufLen = 4 * 8192;
    String query;
    String connectString;
    String outFile;
    Connection conn;

    public LobUtils(String connectString, String query, String outFile) {
        this.connectString = connectString;
        this.query = query;
        this.outFile = outFile;
        this.conn = null;
    }

    public static void main(String[] args)
        throws FileNotFoundException, IOException, SQLException {
        if (args.length < 5) usage();
        int ii = 0;
        String connString = args[ii++];
        String queryFile = null;
        String outputFile = null;

        boolean read = true;
        boolean isBinary = false;

        for (; ii < args.length; ii++) {
            if (args[ii].equals("-write"))
                read = false;

            if (args[ii].equals("-blob"))
                isBinary = true;

            if (args[ii].equals("-qf") && ii < args.length - 1)
                queryFile = args[++ii];

            if (args[ii].equals("-lf") && ii < args.length - 1)
                outputFile = args[++ii];
        }

        if (queryFile == null || outputFile == null) usage();

        // all set
        if (read) {
            BufferedReader freader = new BufferedReader(new FileReader(queryFile));
            StringWriter swriter = new StringWriter();
            int bufLen = 1024;
            char[] cbuf = new char[bufLen];
            int length = -1;
            while ((length = freader.read(cbuf, 0, bufLen)) != -1) {
                swriter.write(cbuf, 0, length);
            }
            freader.close();
            swriter.close();
            String query = swriter.toString();

            LobUtils lutils = new LobUtils(connString, query, outputFile);
            if (isBinary) {
                Blob blob = lutils.getBlob();
                long wrote = lutils.writeBlobToFile(blob);
                System.out.println("Wrote " + wrote + " bytes to file " + outputFile);
            } else {
                Clob clob = lutils.getClob();
                long wrote = lutils.writeClobToFile(clob);
                System.out.println("Wrote " + wrote + " bytes to file " + outputFile);
            }
        } else {
            BufferedReader freader = new BufferedReader(new FileReader(queryFile));
            StringWriter swriter = new StringWriter();
            int bufLen = 1024;
            char[] cbuf = new char[bufLen];
            int length = -1;
            while ((length = freader.read(cbuf, 0, bufLen)) != -1) {
                swriter.write(cbuf, 0, length);
            }
            freader.close();
            swriter.close();
            String query = swriter.toString();

            LobUtils lutils = new LobUtils(connString, query, outputFile);
            Clob clob = lutils.getClob();
            InputStream creader = new FileInputStream(outputFile);
            long wrote = lutils.writeToOraClob(clob, creader);
            System.out.println("Wrote " + wrote + " bytes from file " + outputFile);
        }
    }

    public Clob getClob()
        throws SQLException {
        conn = ConnUtil.getConnection(connectString);
        conn.setAutoCommit(false);
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery(query);
        Clob clob = null;
        if (rs.next()) {
            clob = rs.getClob(1);
        }
        return clob;
    }

    public Blob getBlob()
        throws SQLException {
        conn = ConnUtil.getConnection(connectString);
        conn.setAutoCommit(false);
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery(query);
        Blob blob = null;
        if (rs.next()) {
            blob = rs.getBlob(1);
        }
        return blob;
    }

    public long writeClobToFile(Clob clob)
        throws IOException, SQLException {
        long wrote = 0;
        BufferedWriter fwriter = new BufferedWriter(new FileWriter(outFile));
        wrote = readFromClob(clob, fwriter);
        fwriter.close();
        conn.commit();
        conn.close();
        return wrote;
    }

    public long writeBlobToFile(Blob blob)
        throws IOException, SQLException {
        long wrote = 0;
        OutputStream fwriter = new FileOutputStream(outFile);
        wrote = readFromBlob(blob, fwriter);
        fwriter.close();
        conn.commit();
        conn.close();
        return wrote;
    }

    private static void usage() {
        System.err.println("Usage: java LobUtils user/passwd@sid [-write] [-blob] -qf query_file -lf lob_file");
        System.exit(1);
    }

    public static long writeToOraBlob(Blob blob, InputStream in)
        throws SQLException, IOException {
        BLOB oblob = (BLOB)blob;
        OutputStream out = oblob.getBinaryOutputStream();
        int length = -1;
        long wrote = 0;
        int chunkSize = oblob.getChunkSize();
        byte[] buf = new byte[chunkSize];
        while ((length = in.read(buf)) != -1) {
            out.write(buf, 0, length);
            wrote += length;
        }
        out.close();
        return wrote;
    }

    public long writeToOraClob(Clob clob, InputStream in)
        throws SQLException, IOException {
        CLOB oclob = (CLOB)clob;
        OutputStream out = oclob.getAsciiOutputStream();
        int length = -1;
        long wrote = 0;
        int chunkSize = oclob.getChunkSize();
        byte[] buf = new byte[chunkSize];
        while ((length = in.read(buf)) != -1) {
            out.write(buf, 0, length);
            wrote += length;
        }
        out.close();
        conn.commit();
        return wrote;
    }

    public static long readFromBlob(Blob blob, OutputStream out)
        throws SQLException, IOException {
        InputStream in = blob.getBinaryStream();
        int length = -1;
        long read = 0;
        byte[] buf = new byte[bBufLen];
        while ((length = in.read(buf)) != -1) {
            out.write(buf, 0, length);
            read += length;
        }
        in.close();
        return read;
    }

    public static long readFromClob(Clob clob, Writer out)
        throws SQLException, IOException {
        BufferedReader in = new BufferedReader(new InputStreamReader(clob.getAsciiStream()));
        int length = -1;
        long read = 0;
        char[] buf = new char[bBufLen];
        while ((length = in.read(buf, 0, bBufLen)) != -1) {
            out.write(buf, 0, length);
            read += length;
        }
        in.close();
        return read;
    }
}
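Note: the ConnUtil class referenced in getClob()/getBlob() above was not posted. A minimal sketch of what it might look like follows; this is a guess, not the original -- the class name is taken from the code above, but the URL format is an assumption and should be adjusted (OCI vs. thin driver) for your environment:

// file ConnUtil.java -- hypothetical stand-in for the class referenced above
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ConnUtil
{
    // connectString is whatever you pass on the LobUtils command line.
    // With the OCI driver, the usage string's "user/passwd@sid" form works directly;
    // with the thin driver, pass "user/passwd@host:port:sid" instead and change
    // the URL prefix to "jdbc:oracle:thin:".
    public static Connection getConnection(String connectString)
        throws SQLException {
        DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
        return DriverManager.getConnection("jdbc:oracle:oci8:" + connectString);
    }
}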


Retrieving Data from BLOB into a file

Ambarish Ganguly, November 08, 2002 - 8:22 am UTC

Isn't it a bit surprising that, using Oracle packages, I can successfully write to a BLOB from a file, whereas I cannot read the BLOB back out into a file?
I am using VB and wanted to do all the processing on the server side, hence I was looking for a server-side solution.

Tom Kyte
November 08, 2002 - 9:06 am UTC

9iR2 -- you can -- it is easy.

8i -- you can, java can do this easily if you like.


Retrieving Data from BLOB into a file... using UTL_FILE

Fred Howell, November 11, 2002 - 11:51 am UTC

I am currently searching through the Oracle documentation for UTL_FILE, looking for a procedure that will allow me to write BLOBs to a file. Unfortunately I don't see one that will work. The BLOBs we are working with are scanned images, not text. Is there a UTL_FILE.PUT_BLOB (I tried that), or an equivalent or alternative? I have also considered the Java alternative but would prefer a PL/SQL solution, as that is where the group's expertise is for support and maintenance. Any assistance will be greatly appreciated. Thanks, Fred

Tom Kyte
November 11, 2002 - 6:31 pm UTC

Until 9iR2 -- as stated -- Java or C or some other 3GL is the only solution.

do you have an example for 9iR2?

gs, February 10, 2003 - 3:38 pm UTC

Tom,

Do you by any chance have an example on how to get this done in 9iR2?

Thanks

Tom Kyte
February 11, 2003 - 7:48 am UTC

It is just utl_file.put_raw. You'll have a BLOB; you'll dbms_lob.read or dbms_lob.substr a piece of the BLOB at a time, and utl_file.put_raw it.
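A minimal sketch of what that looks like (not Tom's exact code -- the procedure and directory names are illustrative, it assumes 9iR2 or later, and error handling is omitted):

create or replace procedure blob_to_file( p_dir  in varchar2,
                                          p_file in varchar2,
                                          p_blob in blob )
as
    l_output utl_file.file_type;
    l_amt    number default 32000;   -- stay under the 32k-per-write limit
    l_offset number default 1;
    l_length number default dbms_lob.getlength( p_blob );
    l_buffer raw(32000);
begin
    l_output := utl_file.fopen( p_dir, p_file, 'w', 32760 );
    while ( l_offset <= l_length )
    loop
        -- dbms_lob.read updates l_amt to the number of bytes actually read
        dbms_lob.read( p_blob, l_amt, l_offset, l_buffer );
        utl_file.put_raw( l_output, l_buffer, true );
        l_offset := l_offset + l_amt;
    end loop;
    utl_file.fclose( l_output );
end;
/

p_dir must be an Oracle DIRECTORY object (or a directory listed in utl_file_dir) that the database can write to; the file is written on the server, not the client.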

How do I get around the 32K limit, if using utl_file?

gs., February 11, 2003 - 5:53 pm UTC

Thanks, Tom, for your response.

This is what I was trying; I get the following error if the length exceeds 32K.

declare
*
ERROR at line 1:
ORA-29285: file write error
ORA-06512: at "SYS.UTL_FILE", line 18
ORA-06512: at "SYS.UTL_FILE", line 375
ORA-06512: at "SYS.UTL_FILE", line 990
ORA-06512: at line 35

declare
    i1       blob;
    i2       number;
    amt      number := 10000;
    len      number;
    my_vr    raw(10000);
    l_output utl_file.file_type;
    p_dir    varchar2(30) default 'TEST_DIR';
    p_file   varchar2(30) default 'gs_test1';
begin
    -- get the blob locator
    l_output := utl_file.fopen(p_dir, p_file, 'w', 32760);
    for l_cur in (SELECT record#, xsl mylob FROM docxsl where rownum = 1 order by record#)
    loop
        len := DBMS_LOB.GETLENGTH(l_cur.mylob);
        i1  := l_cur.mylob;
        dbms_output.put_line('Length of the Column : ' || to_char(len));
        i2 := 1;
        if len < 10000 then
            DBMS_LOB.READ(i1,len,i2,my_vr);
            utl_file.put(l_output, my_vr );
        else
            DBMS_LOB.READ(i1,amt,i2,my_vr);
            utl_file.put(l_output, UTL_RAW.CAST_TO_VARCHAR2(my_vr) );
        end if;

        i2 := i2 + amt;
        while (i2 < len) loop -- loop till entire data is fetched
            dbms_output.put_line('i2 : ' || to_char(i2));
            DBMS_LOB.READ(i1,amt,i2,my_vr);
            utl_file.put(l_output, UTL_RAW.CAST_TO_VARCHAR2(my_vr) );
            utl_file.fflush(l_output);
            i2 := i2 + amt;
        end loop;
        utl_file.fclose(l_output);
    end loop;
end;
/

Note: I combined the example from your site with one from Metalink.

I have XSL files stored as BLOBs in the DB. I wanted to retrieve them exactly the way they were stored. I don't want to put a "new line" after every 32K. Can I do this using PL/SQL?

I thought this was related to the question asked, hence I am not submitting a new one.

Thanks very much.

Tom Kyte
February 12, 2003 - 8:12 am UTC

If they are XSL, why are they BLOBs?

But in any case, utl_file has a maximum of 32k per write -- an unavoidable, no-workaround, hard limit.

No workaround!!!!

Steve, March 17, 2003 - 11:00 am UTC

I'm trying to write a dbf file straight out of the database. Right after the header I start to output the rows by looping through them thus:

for i in 1..vRow.count loop
    utl_file.put_raw(v_outputfile,vRow(i),TRUE);
end loop;

Now, this is fine for a small table but for anything bigger I get the write file error - ora-29285. Is there really no way around this? Surely there must be!

Tom Kyte
March 17, 2003 - 11:33 am UTC

If you've got an example, use all_objects and see if it reproduces -- if so, post the code here and I'll look at it.

You'll recognise most of this!

Steve, March 17, 2003 - 11:59 am UTC

The code is an extended version of your dbf file extractor and it also uses your str2tbl function!  Not complete yet as it can't handle dates properly or decimals, but it demonstrates the problem:

CREATE OR REPLACE package dbase_pkg
as

    -- procedure to a load a table with records
    -- from a DBASE file.
    --
    -- Uses a BFILE to read binary data and dbms_sql
    -- to dynamically insert into any table you
    -- have insert on.
    --
    -- p_dir is the name of an ORACLE Directory Object
    --       that was created via the CREATE DIRECTORY
    --       command
    --
    -- p_file is the name of a file in that directory
    --        will be the name of the DBASE file
    --
    -- p_tname is the name of the table to load from
    --
    -- p_cnames is an optional list of comma separated
    --          column names.  If not supplied, this pkg
    --          assumes the column names in the DBASE file
    --          are the same as the column names in the
    --          table
    --
    -- p_show boolean that if TRUE will cause us to just
    --        PRINT (and not insert) what we find in the
    --        DBASE files (not the data, just the info
    --        from the dbase headers....)

    procedure load_Table( p_dir    in varchar2,
                          p_file   in varchar2,
                          p_tname  in varchar2,
                          p_cnames in varchar2 default NULL,
                          p_show   in BOOLEAN default FALSE);
                          
    -- procedure to dump a file to disk in DBASE file format.
    --
    -- in addition to the above, p_where_clause is the filter 
    -- which if null will dump the entire table.  
                          
    procedure dump_table (p_dir          in varchar2,
                            p_file         in varchar2,
                          p_tname        in varchar2,
                          p_cnames       in varchar2 default NULL,
                          p_where_clause in varchar2 default ' 1=1 ');
end;
/
CREATE OR REPLACE package body dbase_pkg
as

-- Might have to change depending on platform!!!
-- Controls the byte order of binary integers read in
-- from the dbase file
BIG_ENDIAN      constant boolean default TRUE;

type dbf_header is RECORD
(
    version    varchar2(25), -- dBASE version number
    year       int,          -- 1 byte int year, add to 1900
    month      int,          -- 1 byte month
    day        int,             -- 1 byte day
    no_records VARCHAR2(8),             -- number of records in file,
                             -- 4 byte int
    hdr_len    VARCHAR2(4),             -- length of header, 2 byte int
    rec_len    VARCHAR2(4),             -- number of bytes in record,
                             -- 2 byte int
    no_fields  int           -- number of fields
);


type field_descriptor is RECORD
(
    name     varchar2(11),
    fname    varchar2(30),
    type     char(1),
    length   int,   -- 1 byte length
    decimals int    -- 1 byte scale
);

type field_descriptor_array
is table of
field_descriptor index by binary_integer;


type rowArray
is table of
varchar2(4000) index by binary_integer;


g_cursor binary_integer default dbms_sql.open_cursor;


function ite( tf in boolean, yes in varchar2, no in varchar2 )
        return varchar2
is
begin
    if ( tf ) then
       return yes;
    else
       return no;
    end if;
end ite;




-- Function to convert a binary unsigned integer
-- into a PLSQL number

function to_int( p_data in varchar2 ) return number
is
    l_number number default 0;
    l_bytes  number default length(p_data);
begin
    if (big_endian)
    then
        for i in 1 .. l_bytes loop
            l_number := l_number +
                              ascii(substr(p_data,i,1)) *
                                           power(2,8*(i-1));
        end loop;
    else
        for i in 1 .. l_bytes loop
            l_number := l_number +
                         ascii(substr(p_data,l_bytes-i+1,1)) *
                         power(2,8*(i-1));
        end loop;
    end if;

    return l_number;
end;


-- Routine to parse the DBASE header record, can get
-- all of the details of the contents of a dbase file from
-- this header

procedure get_header
(p_bfile        in bfile,
 p_bfile_offset in out NUMBER,
 p_hdr          in out dbf_header,
 p_flds         in out field_descriptor_array )
is
    l_data            varchar2(100);
    l_hdr_size        number default 32;
    l_field_desc_size number default 32;
    l_flds            field_descriptor_array;
begin
    p_flds := l_flds;

    l_data := utl_raw.cast_to_varchar2(
                       dbms_lob.substr( p_bfile,
                                        l_hdr_size,
                                        p_bfile_offset ) );
    p_bfile_offset := p_bfile_offset + l_hdr_size;

    p_hdr.version    := ascii( substr( l_data, 1, 1 ) );
    p_hdr.year       := 1900 + ascii( substr( l_data, 2, 1 ) );
    p_hdr.month      := ascii( substr( l_data, 3, 1 ) );
    p_hdr.day        := ascii( substr( l_data, 4, 1 ) );
    p_hdr.no_records := to_int( substr( l_data,  5, 4 ) );
    p_hdr.hdr_len    := to_int( substr( l_data,  9, 2 ) );
    p_hdr.rec_len    := to_int( substr( l_data, 11, 2 ) );
    p_hdr.no_fields  := trunc( (p_hdr.hdr_len - l_hdr_size)/
                                           l_field_desc_size );


    for i in 1 .. p_hdr.no_fields
    loop
        l_data := utl_raw.cast_to_varchar2(
                         dbms_lob.substr( p_bfile,
                                          l_field_desc_size,
                                          p_bfile_offset ));
        p_bfile_offset := p_bfile_offset + l_field_desc_size;

        p_flds(i).name := rtrim(substr(l_data,1,11),chr(0));
        p_flds(i).type := substr( l_data, 12, 1 );
        p_flds(i).length  := ascii( substr( l_data, 17, 1 ) );
        p_flds(i).decimals := ascii(substr(l_data,18,1) );
    end loop;

    p_bfile_offset := p_bfile_offset +
                          mod( p_hdr.hdr_len - l_hdr_size,
                               l_field_desc_size );
end;


function build_insert
( p_tname in varchar2,
  p_cnames in varchar2,
  p_flds in field_descriptor_array ) return varchar2
is
    l_insert_statement long;
begin
    l_insert_statement := 'insert into ' || p_tname || '(';
    if ( p_cnames is NOT NULL )
    then
        l_insert_statement := l_insert_statement ||
                              p_cnames || ') values (';
    else
        for i in 1 .. p_flds.count
        loop
            if ( i <> 1 )
            then
               l_insert_statement := l_insert_statement||',';
            end if;
            l_insert_statement := l_insert_statement ||
                            '"'||  p_flds(i).name || '"';
        end loop;
        l_insert_statement := l_insert_statement ||
                                           ') values (';
    end if;
    for i in 1 .. p_flds.count
    loop
        if ( i <> 1 )
        then
           l_insert_statement := l_insert_statement || ',';
        end if;
        if ( p_flds(i).type = 'D' )
        then

            l_insert_statement := l_insert_statement ||
                     'to_date(:bv' || i || ',''yyyymmdd'' )';
        else
            l_insert_statement := l_insert_statement ||
                                                ':bv' || i;
        end if;
    end loop;
    l_insert_statement := l_insert_statement || ')';

    return l_insert_statement;
end;


function get_row
( p_bfile in bfile,
  p_bfile_offset in out number,
  p_hdr in dbf_header,
  p_flds in field_descriptor_array ) return rowArray
is
    l_data     varchar2(4000);
    l_row   rowArray;
    l_n     number default 2;
begin
    l_data := utl_raw.cast_to_varchar2(
                   dbms_lob.substr( p_bfile,
                                    p_hdr.rec_len,
                                    p_bfile_offset ) );
    p_bfile_offset := p_bfile_offset + p_hdr.rec_len;

    l_row(0) := substr( l_data, 1, 1 );

    for i in 1 .. p_hdr.no_fields loop
        l_row(i) := rtrim(ltrim(substr( l_data,
                                        l_n,
                                        p_flds(i).length ) ));
        if ( p_flds(i).type = 'F' and l_row(i) = '.' )
        then
            l_row(i) := NULL;
        end if;
        l_n := l_n + p_flds(i).length;
    end loop;
    return l_row;
end get_row;


procedure show( p_hdr    in dbf_header,
                p_flds   in field_descriptor_array,
                p_tname  in varchar2,
                p_cnames in varchar2,
                p_bfile  in bfile )
is
    l_sep varchar2(1) default ',';

    procedure p(p_str in varchar2)
    is
        l_str long default p_str;
    begin
        while( l_str is not null )
        loop
            dbms_output.put_line( substr(l_str,1,250) );
            l_str := substr( l_str, 251 );
        end loop;
    end;
begin
    p( 'Sizeof DBASE File: ' || dbms_lob.getlength(p_bfile) );

    p( 'DBASE Header Information: ' );
    p( chr(9)||'Version = ' || p_hdr.version );
    p( chr(9)||'Year    = ' || p_hdr.year   );
    p( chr(9)||'Month   = ' || p_hdr.month   );
    p( chr(9)||'Day     = ' || p_hdr.day   );
    p( chr(9)||'#Recs   = ' || p_hdr.no_records);
    p( chr(9)||'Hdr Len = ' || p_hdr.hdr_len  );
    p( chr(9)||'Rec Len = ' || p_hdr.rec_len  );
    p( chr(9)||'#Fields = ' || p_hdr.no_fields );

    p( chr(10)||'Data Fields:' );
    for i in 1 .. p_hdr.no_fields
    loop
        p( 'Field(' || i || ') '
             || 'Name = "' || p_flds(i).name || '", '
             || 'Type = ' || p_flds(i).Type || ', '
             || 'Len  = ' || p_flds(i).length || ', '
             || 'Scale= ' || p_flds(i).decimals );
    end loop;

    p( chr(10) || 'Insert We would use:' );
    p( build_insert( p_tname, p_cnames, p_flds ) );

    p( chr(10) || 'Table that could be created to hold data:');
    p( 'create table ' || p_tname );
    p( '(' );

    for i in 1 .. p_hdr.no_fields
    loop
        if ( i = p_hdr.no_fields ) then l_sep := ')'; end if;
        dbms_output.put
        ( chr(9) || '"' || p_flds(i).name || '"   ');

        if ( p_flds(i).type = 'D' ) then
            p( 'date' || l_sep );
        elsif ( p_flds(i).type = 'F' ) then
            p( 'float' || l_sep );
        elsif ( p_flds(i).type = 'N' ) then
            if ( p_flds(i).decimals > 0 )
            then
                p( 'number('||p_flds(i).length||','||
                              p_flds(i).decimals || ')' ||
                              l_sep );
            else
                p( 'number('||p_flds(i).length||')'||l_sep );
            end if;
        else
            p( 'varchar2(' || p_flds(i).length || ')'||l_sep);
        end if;
    end loop;
    p( '/' );
end;


procedure load_Table( p_dir in varchar2,
                      p_file in varchar2,
                      p_tname in varchar2,
                      p_cnames in varchar2 default NULL,
                      p_show in boolean default FALSE )
is
    l_bfile      bfile;
    l_offset  number default 1;
    l_hdr     dbf_header;
    l_flds    field_descriptor_array;
    l_row      rowArray;
begin
    l_bfile := bfilename( p_dir, p_file );
    dbms_lob.fileopen( l_bfile );

    get_header( l_bfile, l_offset, l_hdr, l_flds );

    if ( p_show )
    then
        show( l_hdr, l_flds, p_tname, p_cnames, l_bfile );
    else
        dbms_sql.parse( g_cursor,
                        build_insert(p_tname,p_cnames,l_flds),
                        dbms_sql.native );

        for i in 1 .. l_hdr.no_records loop
            l_row := get_row( l_bfile,
                              l_offset,
                              l_hdr,
                              l_flds );

            if ( l_row(0) <> '*' ) -- deleted record
            then
                for i in 1..l_hdr.no_fields loop
                    dbms_sql.bind_variable( g_cursor,
                                            ':bv'||i,
                                            l_row(i),
                                            4000 );
                end loop;
                if ( dbms_sql.execute( g_cursor ) <> 1 )
                then
                    raise_application_error( -20001,
                                 'Insert failed ' || sqlerrm );
                end if;
            end if;
        end loop;
    end if;

    dbms_lob.fileclose( l_bfile );
exception
    when others then
        if ( dbms_lob.isopen( l_bfile ) > 0 ) then
            dbms_lob.fileclose( l_bfile );
        end if;
        RAISE;
end;

procedure put_header (p_tname        in varchar2,
                        p_cnames       in varchar2 DEFAULT NULL,
                      l_hdr          in out dbf_header,
                      vFlds          in out field_descriptor_array)
                      

                      
is

v_value_list  strTableType;

vCursor   varchar2(2000);
type rc IS ref cursor;
col_cur rc;
i  INTEGER:=0;

begin

IF p_cnames IS NOT NULL THEN
    v_value_list:=str2tbl(UPPER(p_cnames));
    vCursor:='select substr(column_name,1,11), 
                     case data_type 
                     when ''DATE'' then ''D''
                     when ''NUMBER'' then ''N''
                     else ''C'' end ,
                     case data_type
                     when ''NUMBER'' then NVL(data_precision,22)
                     when ''DATE'' then 8
                     else data_length end,
                     case data_type
                     when ''NUMBER'' then data_scale
                     end ,
                     column_name   from all_tab_cols 
                where column_name IN (select * from TABLE (cast(str2tbl(UPPER('''||p_cnames||''')) as strTableType)))
                and   table_name='''||upper(p_tname)||'''
                order by column_id';

else

    vCursor:='select SUBSTR(column_name,1,11), 
                     case data_type 
                     when ''DATE'' then ''D''
                     when ''NUMBER'' then ''N''
                     else ''C'' end ,
                     case data_type
                     when ''NUMBER'' then NVL(data_precision,22)
                     when ''DATE'' then 8
                     else data_length end,
                     case data_type
                     when ''NUMBER'' then data_scale
                     end ,
                     column_name
              from all_tab_cols
              where table_name='''||upper(p_tname)||'''
              order by column_id';
END IF;

open col_cur for vCursor;

loop
    i:=i+1;
    fetch col_cur into vFlds(i).name,vFlds(i).type,vFlds(i).length,vFlds(i).decimals,vFlds(i).fname;
    exit when col_cur%notfound;
    
end loop;

close col_cur;

l_hdr.version    :='03';
l_hdr.year       :=to_number(to_char(sysdate,'yyyy'))-1900;
l_hdr.month      :=to_number(to_char(sysdate,'mm'));
l_hdr.day        :=to_number(to_char(sysdate,'dd'));
l_hdr.rec_len    :=1; -- to be set later 
l_hdr.no_fields  :=vFlds.COUNT;
l_hdr.hdr_len    :=to_char((l_hdr.no_fields*32)+33,'FM000x');


end;


procedure put_rows (p_tname IN  varchar2,
                      p_where_clause in varchar2 default '1=1 ',
                    vRow in out rowarray,
                    vFlds in field_descriptor_array)

is
type rc is ref cursor;

cur rc;

i  integer:=0;
vSelectList  VARCHAR2(32767);

v_cur VARCHAR2(32767);

begin


for l in 1..vFlds.count loop

    vSelectList:=vSelectList||ite(l!=1,'||','')||'utl_raw.cast_to_raw(rpad(NVL('|| case when vFlds(l).type='N' then 'to_char(' end ||vFlds(l).fname||case when vFlds(l).type='N' then ')' end ||','' ''),'||vFlds(l).length||','' ''))';

end loop;

v_cur:='select '||vSelectList||' from '||p_tname||' where '||p_where_clause;

open cur for v_cur;

loop

    i:=i+1;
    fetch cur into vRow(i);
    exit when cur%notfound;
    
end loop;

close cur;

end;


procedure dump_table (p_dir          in varchar2,
                        p_file         in varchar2,
                      p_tname        in varchar2,
                      p_cnames       in varchar2 default NULL,
                      p_where_clause in varchar2 default ' 1=1 ')
                      
    

is
l_hdr dbf_header;
vFlds field_descriptor_array;
vRow  rowarray;
v_cnames  VARCHAR2(4000);
v_outputfile UTL_FILE.FILE_TYPE;

vCount int;

vStartTime  DATE;
vEndTime    DATE;
begin

    vStartTime:=sysdate;

    put_header(p_tname,p_cnames,l_hdr,vFlds);
    
    
    put_rows(p_tname,p_where_clause,vRow,vFlds);
    
    v_outputfile := utl_file.fopen(p_dir,p_file,'w',32767);
    
    for i in 1..vFlds.count loop
    
        l_hdr.rec_len:=l_hdr.rec_len+vFlds(i).length;
        
    end loop;
    
    l_hdr.rec_len    :=to_char(to_number(l_hdr.rec_len),'FM000x');
    
    l_hdr.rec_len   :=substr(l_hdr.rec_len,-2)||
                       substr(l_hdr.rec_len,1,2);
                      
    l_hdr.no_records :=to_char(vRow.count,'FM0000000x'); 
    
    l_hdr.no_records:=substr(l_hdr.no_records,-2)||
                       substr(l_hdr.no_records,5,2)||
                      substr(l_hdr.no_records,3,2)||
                      substr(l_hdr.no_records,1,2);
                      
    
    l_hdr.hdr_len:=substr(l_hdr.hdr_len,-2)||
                       substr(l_hdr.hdr_len,1,2);


    utl_file.put_raw(v_outputFile,
              rpad(l_hdr.version||to_char(l_hdr.year,'FM0x')||to_char(l_hdr.month,'FM0x')||
              to_char(l_hdr.day,'FM0x')||l_hdr.no_records||l_hdr.hdr_len||
              l_hdr.rec_len,64,'0'));
              
    for i in 1..vFlds.count loop
    
        utl_file.put_raw(v_outputFile,utl_raw.cast_to_raw(vFlds(i).name)||replace(rpad('00',12-length(vFlds(i).name),'#'),'#','00')||
                                      utl_raw.cast_to_raw(vFlds(i).type)||'00000000'||
                                      to_char(vFlds(i).length,'FM0x')||'000000000000000000000000000000');
    end loop;
    
    -- terminator for the field names 
    utl_file.put_raw(v_outputFile,'0D');
    
    
    for i in 1..vRow.count loop
    
            
        utl_file.put_raw(v_outputfile,'20'||vRow(i),TRUE);
        
            end loop;
    
    if utl_file.IS_OPEN(v_outputFile ) then
        UTL_FILE.FCLOSE(v_outputFile);
    end if;
    
    vEndTime:=sysdate;
    
    dbms_output.put_line('Started - '||to_char(vStartTime,'HH24:MI'));
    dbms_output.put_line('Finished - '||to_char(vEndTime,'HH24:mi'));
    dbms_output.put_line('Elapsed - '||to_char((vEndTime-vStartTime),'hh24:mi'));
exception
when others then
    utl_file.fclose(v_outputFile);
    raise;
    
end;

end;
/

SQL>  exec dbase_pkg.dump_table('c:\dbf','test.dbf','all_objects','owner,object_name,subobject_name,
object_id')
BEGIN dbase_pkg.dump_table('c:\dbf','test.dbf','all_objects','owner,object_name,subobject_name,objec

*
ERROR at line 1:
ORA-29285: file write error
ORA-06512: at "STEVE.DBASE_PKG", line 564
ORA-06512: at line 1


SQL> 

But if I limit the number of rows:

SQL> exec dbase_pkg.dump_table('c:\dbf','test.dbf','all_objects','owner,object_name,subobject_name,o
bject_id','rownum<100')

PL/SQL procedure successfully completed.

SQL> 
 

Tom Kyte
March 17, 2003 - 4:09 pm UTC

A bit hokey but this seems to work around the issue - which seems to be a line buffering issue:

for i in 1..vRow.count
loop
    utl_file.put_raw(v_outputfile,'20'||vRow(i),TRUE);
    if ( mod(i,200) = 0 )
    then
        utl_file.fclose( v_outputFile );
        v_outputfile := utl_file.fopen(p_dir,p_file,'a',32767);
    end if;
end loop;


Suggestion:

o put together the smallest test case possible you can
o submit that to support
o use the workaround for the time being...




steve, March 17, 2003 - 4:14 pm UTC

Thanks Tom - I wondered whether closing and re-opening the file would work but thought that I must be missing something. I'll prepare a test case for support and post here if they come up with a solution.

Once again, thanks

Steve

Did the dbf conversion work?

dharma, May 19, 2003 - 5:08 am UTC

I am trying to convert an Oracle table to dbf format and am not able to do that. Did Steve's code work? Just curious, I don't want to reinvent the wheel!

Thanks

Tom Kyte
May 19, 2003 - 7:53 am UTC

it is right there for you to try -- go for it.

Yes it did

Steve, May 19, 2003 - 2:24 pm UTC

although you're still constrained by the 32k limit - Oracle Support suggested making sure I had a utl_file.new_line every so often, but this, of course, added a couple of unwanted bytes to the file - OK for an image but not for a dbf. So in the end, instead of using utl_file.put_raw, I had to use a small Java proc to output the file - it works a treat and didn't even need 9iR2 to work!!

I should add.....

steve, May 20, 2003 - 4:57 am UTC

that Oracle says the 32k limit is fixed in 10i, but that there is no backport for it.

It Worked!!

dharma, May 21, 2003 - 6:46 pm UTC

The first time I tried to do it on my own and messed it up; then the copy-paste worked fine :)
Thanks, the conversion to dbf worked perfectly. I added the code for dates and the workaround for utl_file.

Got a good understanding of utl_raw also :)

One question that bothers me: the header is 32 bytes, so why do we have to allocate 64 bytes? (I assume it is because it's rpad'ed with 64 '0's.) Is it because 32 bytes written out as hex characters is 64 characters of ASCII?

Thanks to steve too :)

-dharma

Blob to File (9.2)

Ray White, August 06, 2003 - 3:22 pm UTC

Hi, I am using an above snippet (modified)

declare
    vblob    blob;
    vstart   number := 1;
    bytelen  number := 32000;
    len      number;
    my_vr    raw(32000);
    l_output utl_file.file_type;
    p_dir    varchar2(30) default 'f:\gol\ora_vscan';
    p_file   varchar2(30) default 'CMsitegarden.pdf';
begin
    -- get the blob locator
    l_output := utl_file.fopen(p_dir, p_file, 'w', 32760);
    for l_cur in (SELECT DOCUMENT_CONTENT mylob FROM gol_attached_document where SUPP_DOCUMENT_NUMBER = 4 and rownum = 1)
    loop
        len := DBMS_LOB.GETLENGTH(l_cur.mylob);
        vblob := l_cur.mylob;
        dbms_output.put_line('Length of the Column : ' || to_char(len));
        vstart := 1;
        while (vstart < len) loop -- loop till entire data is fetched
            dbms_output.put_line('vstart : ' || to_char(vstart));
            DBMS_LOB.READ(vblob,bytelen,vstart,my_vr);
            utl_file.put_raw(l_output,my_vr);
            utl_file.fflush(l_output);
            vstart := vstart + bytelen;
        end loop;
        utl_file.fclose(l_output);
    end loop;
end;

This outputs my file fine, but the file will not open in the associated software:

a .doc will not open in Word,
a .pdf will not open in Acrobat,
yet the file sizes match the source.

Can you tell me what I am doing wrong?

For .txt files in the blob, do I have to do anything different?

tia
Ray

Blob to File

Ray, August 06, 2003 - 3:42 pm UTC

More info on the above.

I looked at the hex of both the source file and the file after upload and write. Every time the source had a 0A, the resulting file had 0D0A; the resulting file also had a 0D0A at the end.




Tom Kyte
August 09, 2003 - 12:07 pm UTC

Please contact support for this one -- it looks like a Windows-ism -- \r\n (0D0A) is the DOS carriage return/linefeed end-of-line marker. On Unix this doesn't happen, and that's all I've got.
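One possible workaround, offered as an assumption rather than something confirmed in this thread: later releases of UTL_FILE support byte-oriented open modes, and opening the file with 'wb' instead of 'w' avoids the text-mode 0A -> 0D0A translation on Windows, e.g.:

    l_output := utl_file.fopen(p_dir, p_file, 'wb', 32760);

Check the PL/SQL Packages reference for your release to confirm the 'wb' mode is available before relying on this.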

Java program posted by "Mio from US" is really useful. Thanks Mio & Tom.

Sami, December 31, 2003 - 9:45 am UTC


getAsciiStream vs. getCharacterStream

Sanjay Jha, January 29, 2004 - 11:05 am UTC

Tom,
We are facing something very strange here. We have two identical databases, one on Windows 2000 and the other on SunOS, both with the UTF8 character set. Our developers coded an application using the getCharacterStream method to access a CLOB column. It worked fine against the database running on Windows 2000, but it did not show the right data when run against the database running on SunOS -- it showed garbage instead.
When we changed to getAsciiStream, it works fine for both databases. However, we are concerned about the multilingual data (Chinese, etc.) that will eventually be stored in this column; retrieving it as ASCII will not give us the right result, since Chinese uses multi-byte storage for characters, unlike ASCII, which uses a single byte.
Could you point out what we are missing? It is surprising that the operating system would have a bearing on something inside the Oracle database; I still do not believe that to be true. What is your opinion, and solution if any?

Regards.

Tom Kyte
January 29, 2004 - 1:26 pm UTC

What was the NLS_LANG on the clients?

We are running Weblogic as application server

Sanjay Jha, January 30, 2004 - 9:53 am UTC

Tom,
We are running the application through BEA WebLogic Server (on Windows 2000). We just changed the database connection string to point at one database or the other while keeping the database "client" (the WebLogic server) constant. The results were different though, as mentioned (i.e. garbage comes back when we connect to the database running on SunOS). We tried setting NLS_LANG to utf8 as a Windows environment variable, and also tried setting nls_lang to utf8 in the WebLogic startup parameters. No difference in the outcome.
Please help.

Regards.

Tom Kyte
January 30, 2004 - 9:57 am UTC

should not happen -- all things constant -- same DB character sets (please verify that -- select * from nls_database_parameters on both and verify)

NLS settings are identical...

Sanjay Jha, January 30, 2004 - 2:29 pm UTC

Tom,
I have just compared the two; they are identical.
One more piece of information: I had set nls_lang=.utf8, exported the data from the database on Windows (character set UTF8), and then imported it into the schema in the database running on SunOS, also UTF8. Will it corrupt the data for LOB columns if the data is more than 4000 bytes, since it will then be stored outside the table row? Does that have anything to do with the problem? However, the problem occurs only when we access the data using getCharacterStream, not when using getAsciiStream.
I know it might be a little confusing, but it definitely has something to do with character encoding, which behaves differently for data accessed from two databases that are identical but running on different operating systems.
Any help will be highly appreciated, Tom.

Regards.


tester@TEST_Win8i> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
PL/SQL Release 8.1.7.0.0 - Production
CORE 8.1.7.0.0 Production
TNS for 32-bit Windows: Version 8.1.7.0.0 - Production
NLSRTL Version 3.4.1.0.0 - Production

tester@TEST_Win8i> select * from nls_database_parameters;

PARAMETER VALUE
------------------------------ --------------------------------
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS .,
NLS_CHARACTERSET UTF8
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_SORT BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZH:TZM
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZH:TZM
NLS_DUAL_CURRENCY $
NLS_COMP BINARY
NLS_NCHAR_CHARACTERSET UTF8
NLS_RDBMS_VERSION 8.1.7.0.0

18 rows selected.

tester@TEST_SUN8i> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
PL/SQL Release 8.1.7.0.0 - Production
CORE 8.1.7.0.0 Production
TNS for Solaris: Version 8.1.7.0.0 - Production
NLSRTL Version 3.4.1.0.0 - Production

tester@TEST_SUN8i> select * from nls_database_parameters;

PARAMETER VALUE
------------------------------ --------------------------------
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS .,
NLS_CHARACTERSET UTF8
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT dd-mon-rrrr hh24:mi:ss
NLS_DATE_LANGUAGE AMERICAN
NLS_SORT BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZH:TZM
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZH:TZM
NLS_DUAL_CURRENCY $
NLS_COMP BINARY
NLS_NCHAR_CHARACTERSET UTF8
NLS_RDBMS_VERSION 8.1.7.0.0

18 rows selected.

Tom Kyte
January 30, 2004 - 7:50 pm UTC

It is not stored outside the database; it is stored "out of line" inside the database.

I'll have to refer you to support, as I do pretty much nothing with Java (well, as little as humanly possible anyway).

I found the problem....

Sanjay Jha, February 03, 2004 - 3:47 pm UTC

Tom,
Tom,
The good news is that I figured out the problem and everything is working; the bad news is that I still do not understand the reason clearly.
As I mentioned before, I have two identical databases with the UTF8 character set, running on Windows 2000 and SunOS. I logged in to the Windows server, set nls_lang=.utf8, and exported the entire schema (the dump file was created on the Windows platform from the database running on Windows). Then, during the same session, I imported the data into the database running on SunOS. Everything worked fine (including some data with Chinese characters) in the database running on SunOS, except the CLOB data, which showed up as garbage.

Dump from database running on Sun OS:
DUMP(DBMS_LOB.SUBSTR(JOB_OFFER_DESC,10,1),1016)
--------------------------------------------------------------------------------
--------------------
Typ=1 Len=20 CharacterSet=UTF8:
c5,8c,c5,a1,c4,a0,c5,b4,c5,a9,c5,b4,c5,b5,c5,ac,c5,a1,c5,a9


Dump from database running on Windows OS:
DUMP(DBMS_LOB.SUBSTR(JOB_OFFER_DESC,10,1),1016)
--------------------------------------------------------------------------------
--------------------
Typ=1 Len=10 CharacterSet=UTF8: 4c,61,20,74,69,74,75,6c,61,69

However, I did things a little differently this time: I ftp'd the dump file from the Windows platform to the Sun box in binary mode, then logged on to SunOS and imported into the database running there. Everything works fine now!

Incidentally, when I tried to export the data from the database running on Windows while logged on to SunOS, I received the following error:
EXP-00056: ORACLE error 3120 encountered
ORA-03120: two-task conversion routine: intege
Export terminated successfully with warnings.


As I mentioned before, I resolved the issue in trial-and-error fashion without understanding what was happening. Maybe you can throw some light on this.

Regards.

Tom Kyte
February 03, 2004 - 6:20 pm UTC

If you got an error like that last one -- that would be best handled by support.

Do you have the "header" of the imp session from Windows to Unix? I'd like to see the character sets in use at that point.

Good english... from an Indian. after a long time.

Zaphod B., February 04, 2004 - 3:08 am UTC


Sanjay Jha, February 05, 2004 - 10:16 am UTC

Tom,
I had opened a TAR already. But your response is much faster ;)
Here is the header of the import log:
*****************************************************************
Connected to: Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
With the Partitioning option
JServer Release 8.1.7.0.0 - Production

Export file created by EXPORT:V08.01.07 via conventional path
import done in UTF8 character set and UTF8 NCHAR character set
****************************************************************

Oracle Support is indicating that it has something to do with the BUFFER size I had set, and the solution they are proposing is to reduce that number in a trial-and-error fashion. Here is what they said:


Solution Explanation
=====================

Specify the BUFFER parameter in the Export/Import command line or in the
Export/Import parameter file. Reduce the value of BUFFER until the error is
resolved.

For example:

If BUFFER is set to 10485760 (10 MB), this will cause Export to buffer 10MB of
data before writing it to the export dump file. This would cause Import to
buffer 10MB of data before actually sending it to the database (write to
datafiles). This becomes a problem when doing Export/Import remotely. 10MB of
data will try to be sent across the network at one time. On the average a
network packet (buffer) size is 1500 bytes. So trying to send 10MB at one time
via 1500 byte packets is causing the overflow to occur.


Tom, could you explain what the difference is, in terms of buffer size, between an export/import done remotely (via a SQL*Net connection) and one done locally? Does it have any bearing on the character set encoding?

Regards,

Sanjay

Tom Kyte
February 05, 2004 - 7:19 pm UTC

I'm skeptical of that one (the buffer size).

Need to contact Mio or Sami

Steve Cover, June 03, 2004 - 10:37 am UTC

Hello- I am trying to get MIO's Java code running and have some questions about it. Can one of you contact me at 410-694-5877 or by email.

Tom Kyte
June 03, 2004 - 1:50 pm UTC

what problem are you having?

Errors with ConnUtil

Steve Cover, June 03, 2004 - 3:51 pm UTC

I am not a Java expert by any means... but I need to get the BLOBs out of this 8.1.7 database.
I used the Java code posted a while back by Mio and it would not compile properly. I grabbed one of our Java developers for a few minutes and he modified the way the DB connection was done. It still does not work: it compiles, but I get a runtime error on the Oracle connection.

Command Line Syntax:
C:\Oracle_Util>java LobUtils -blob -qf query.sql -lf 1.pdf

Error Message:
Exception in thread "main" java.lang.UnsatisfiedLinkError: do_open
at oracle.jdbc.oci8.OCIDBAccess.logon(OCIDBAccess.java)
at oracle.jdbc.driver.OracleConnection.<init>(OracleConnection.java)
at oracle.jdbc.driver.OracleDriver.getConnectionInstance(OracleDriver.ja
va)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java)
at java.sql.DriverManager.getConnection(DriverManager.java:457)
at java.sql.DriverManager.getConnection(DriverManager.java:159)
at LobUtils.GetConnection(LobUtils.java:156)
at LobUtils.getClob(LobUtils.java:124)
at LobUtils.main(LobUtils.java, Compiled Code)
Code:
// file LobUtils.java
import java.io.InputStream;
import java.io.OutputStream;
import java.io.IOException;
import java.io.FileInputStream;
import java.io.FileOutputStream;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.Blob;
import oracle.sql.BLOB;
import java.sql.SQLException;

import java.sql.Clob;
import oracle.sql.CLOB;

import java.io.FileReader;
import java.io.BufferedReader;
import java.io.StringWriter;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.Writer;
import java.io.InputStreamReader;
import java.sql.DriverManager;

public class LobUtils
{
final static int bBufLen = 4 * 8192;
String query;
String connectString;
String outFile;
Connection conn;

public LobUtils(String connectString, String query, String outFile) {
this.connectString = connectString;
this.query = query;
this.outFile = outFile;
this.conn = null;
}

public static void main(String[] args)
throws FileNotFoundException, IOException, SQLException {
if (args.length < 5) usage();
int ii = 0;
String connString = args[ii++];
String queryFile = null;
String outputFile = null;

boolean read = true;
boolean isBinary = false;

for (; ii < args.length; ii++) {
if (args[ii].equals("-write"))
read = false;

if (args[ii].equals("-blob"))
isBinary = true;

if (args[ii].equals("-qf") && ii < args.length - 1)
queryFile = args[++ii];

if (args[ii].equals("-lf") && ii < args.length - 1)
outputFile = args[++ii];

}

if (queryFile == null || outputFile == null) usage();

// all set
if (read) {
BufferedReader freader = new BufferedReader(new
FileReader(queryFile));
StringWriter swriter = new StringWriter();
int bufLen = 1024;
char[] cbuf = new char[bufLen];
int length = -1;
while ((length = freader.read(cbuf, 0, bufLen)) != -1) {
swriter.write(cbuf, 0, length);
}
freader.close();
swriter.close();
String query = swriter.toString();

LobUtils lutils = new LobUtils(connString, query, outputFile);
if (isBinary) {
Blob blob = lutils.getBlob();
long wrote = lutils.writeBlobToFile(blob);
System.out.println("Wrote " + wrote + " bytes to file " +
outputFile);
} else {
Clob clob = lutils.getClob();
long wrote = lutils.writeClobToFile(clob);
System.out.println("Wrote " + wrote + " bytes to file " +
outputFile);
}
} else {
BufferedReader freader = new BufferedReader(new
FileReader(queryFile));
StringWriter swriter = new StringWriter();
int bufLen = 1024;
char[] cbuf = new char[bufLen];
int length = -1;
while ((length = freader.read(cbuf, 0, bufLen)) != -1) {
swriter.write(cbuf, 0, length);
}
freader.close();
swriter.close();
String query = swriter.toString();

LobUtils lutils = new LobUtils(connString, query, outputFile);
Clob clob = lutils.getClob();
InputStream creader = new FileInputStream(outputFile);
long wrote = lutils.writeToOraClob(clob, creader);
System.out.println("Wrote " + wrote + " bytes from file " +
outputFile);
}
}

public Clob getClob()
throws SQLException {
conn = GetConnection(connectString);
conn.setAutoCommit(false);
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(query);
Clob clob = null;
if (rs.next()) {
clob = rs.getClob(1);
}
return clob;
}

public Blob getBlob()
throws SQLException {
conn = GetConnection(connectString);
conn.setAutoCommit(false);
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(query);
Blob blob = null;
if (rs.next()) {
blob = rs.getBlob(1);
}
return blob;
}

public Connection GetConnection(String connectString) {
try {
Class.forName ("oracle.jdbc.driver.OracleDriver");

String tns = "jdbc:oracle:oci8:dms_user/ciena@pdmaglsc";
String user = "dms_user";
String pwd = "ciena";

return DriverManager.getConnection(tns);
}
catch (Exception e) {
e.printStackTrace();
return null;
}
}

public long writeClobToFile(Clob clob)
throws IOException, SQLException {
long wrote = 0;
BufferedWriter fwriter = new BufferedWriter(new FileWriter(outFile));
wrote = readFromClob(clob, fwriter);
fwriter.close();
conn.commit();
conn.close();
return wrote;
}

public long writeBlobToFile(Blob blob)
throws IOException, SQLException {
long wrote = 0;
OutputStream fwriter = new FileOutputStream(outFile);
wrote = readFromBlob(blob, fwriter);
fwriter.close();
conn.commit();
conn.close();
return wrote;
}

private static void usage() {
System.err.println("Usage: java LobUtils user/passwd@sid [-write] [-blob] -qf query_file -lf lob_file");
System.exit(1);
}

public static long writeToOraBlob(Blob blob, InputStream in)
throws SQLException, IOException {
BLOB oblob = (BLOB)blob;
OutputStream out = oblob.getBinaryOutputStream();
int length = -1;
long wrote = 0;
int chunkSize = oblob.getChunkSize();
byte[] buf = new byte[chunkSize];
while ((length = in.read(buf)) != -1) {
out.write(buf, 0, length);
wrote += length;
}
out.close();
return wrote;
}

public long writeToOraClob(Clob clob, InputStream in)
throws SQLException, IOException {
CLOB oclob = (CLOB)clob;
OutputStream out = oclob.getAsciiOutputStream();
int length = -1;
long wrote = 0;
int chunkSize = oclob.getChunkSize();
byte[] buf = new byte[chunkSize];
while ((length = in.read(buf)) != -1) {
out.write(buf, 0, length);
wrote += length;
}
out.close();
conn.commit();
return wrote;
}

public static long readFromBlob(Blob blob, OutputStream out)
throws SQLException, IOException {
InputStream in = blob.getBinaryStream();
int length = -1;
long read = 0;
byte[] buf = new byte[bBufLen];
while ((length = in.read(buf)) != -1) {
out.write(buf, 0, length);
read += length;
}
in.close();
return read;
}

public static long readFromClob(Clob clob, Writer out)
throws SQLException, IOException {
BufferedReader in = new BufferedReader(new
InputStreamReader(clob.getAsciiStream()));
int length = -1;
long read = 0;
char[] buf = new char[bBufLen];
while ((length = in.read(buf, 0, bBufLen)) != -1) {
out.write(buf, 0, length);
read += length;
}
in.close();
return read;
}
}


scover@ciena.com

Any Help is good at this point!


Tom Kyte
June 03, 2004 - 4:16 pm UTC

They made it use the thick OCI driver.

You need to make sure the environment is set properly:

- CLASSPATH needs to have $ORACLE_HOME/jdbc/lib/classes12.zip

- you are on Windows, so I'm not sure what else... you need to have your Oracle home set up -- e.g., if you cannot run sqlplus, you cannot run this.
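For example, on Windows, something along these lines (a sketch only; the paths, TNS alias and credentials are placeholders for your own environment, not values from this thread):

    set ORACLE_HOME=C:\oracle\ora81
    set PATH=%ORACLE_HOME%\bin;%PATH%
    set CLASSPATH=.;%ORACLE_HOME%\jdbc\lib\classes12.zip

    javac LobUtils.java
    java LobUtils scott/tiger@mydb -blob -qf query.sql -lf out.pdf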

Errors with ConnUtil

Steve Cover, June 03, 2004 - 4:34 pm UTC

I have the PATH set up to include $ORACLE_HOME/jdbc/lib/classes12.zip, but no CLASSPATH on the Windows machine running this program. SQL*Plus runs fine. Something is missing, I'm just not sure what. Is there another way I can connect to the database, using the thin client?

Do you have the "ConnUtil" code referenced in the original code?

Steve
scover@ciena.com

Tom Kyte
June 03, 2004 - 7:12 pm UTC

No, I do not.


Can't the Java programmer give you a hand? For them this would be "cake"; they should be able to do this in your environment in milliseconds.

Trying another one of the existing approaches already taken

SteveCover, June 04, 2004 - 12:10 pm UTC

All-
I am not having any luck with the JDBC connection. Driving me nuts. I am under the gun to get some sample documents out of this database by COB today.

Can you provide any assistance with a complete solution to pull the files (BLOBs) out? I have the procedure below compiled in the database; it runs, but nothing happens. How can I WRITE THE FILE to disk from SQL?

None of the solutions posted is straightforward enough for a semi-literate developer like me.
I need some additional detail if possible.

procedure file_download(p_file_id tl_docs.obj_id%type)
as
    l_lob  tl_docs.blob_obj%type;
    l_name te_sachnr.obj_id%type;
begin
    select f.blob_obj, a.obj_id
      into l_lob, l_name
      from tl_docs f, te_sachnr a
     where f.obj_id = p_file_id;

    owa_util.mime_header('application/pdf');
    owa_util.http_header_close;
    wpg_docload.download_file(l_lob);
end;

scover@ciena.com

Tom Kyte
June 04, 2004 - 12:35 pm UTC



You need to have a client on the receiving end willing to accept the bits and bytes and write them out.


Are we allowed to write to the SERVER file system?
Do you have Java installed in the database?

I could probably put together a Java stored procedure that does this for you, but it would have to write on the SERVER's file system.
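For what it's worth, here is a rough sketch of the kind of Java stored procedure being described. The object names are made up, the schema needs Java installed plus java.io.FilePermission on the target directory (e.g. via dbms_java.grant_permission), and it writes on the SERVER's file system -- treat it as an illustration, not a tested solution:

create or replace and compile java source named "BlobWriter"
as
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import oracle.sql.BLOB;

public class BlobWriter
{
    // writes the given BLOB to fileName on the database server's file system
    public static void writeToFile(BLOB blob, String fileName) throws Exception
    {
        InputStream  in  = blob.getBinaryStream();
        OutputStream out = new FileOutputStream(fileName);
        byte[] buf = new byte[32768];
        int len;
        while ((len = in.read(buf)) != -1)
        {
            out.write(buf, 0, len);
        }
        out.close();
        in.close();
    }
}
/

create or replace procedure write_blob_to_file( p_blob in blob, p_filename in varchar2 )
as language java
name 'BlobWriter.writeToFile(oracle.sql.BLOB, java.lang.String)';
/

You would then select the BLOB into a local variable and call write_blob_to_file( l_blob, '/some/server/path/file.pdf' ).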

8.1.7 upgrade to 9i Blob extract

Steve Cover, June 04, 2004 - 12:24 pm UTC

I had our DBA import this schema, with the tables containing BLOBs, into a 9i database. After reading many of these pages it "appears", although I am not entirely sure, that in 9i pulling BLOB data OUT is much easier. Is this true?

I seem to remember reading an official answer from Tom indicating that BLOBs cannot be pulled out of an 8.1.7 database with just SQL.

Thanks,
Steev
scover@ciena.com


Tom Kyte
June 10, 2008 - 11:32 am UTC

utl_file has the ability to write binary files *sometimes*. There are unfortunate cases where it fails.

And it'll write on the SERVER, not on a client.



BUT -- I just had an idea... a pretty cool one, I think, too.

I grabbed:

http://www.oracle-base.com/articles/9i/FTPFromPLSQL9i.php

(For 8i, you need to make two changes in ftp.pkb:

raise_application_error( -20001, 'Not implemented' );
-- UTL_FILE.put_raw(l_out_file, l_buffer, TRUE);

that is, comment out the put_raw in the two places it appears and replace it with the raise_application_error.)

and in 8i, if you get:

DECLARE
*
ERROR at line 1:
ORA-29531: no method get_crlf in class oracle/plsql/net/TCPConnection
ORA-06512: at "SYS.UTL_TCP", line 678
ORA-06512: at "SYS.UTL_TCP", line 671
ORA-06512: at "TKYTE.FTP", line 39
ORA-06512: at line 8

when you run this, as SYS or as SYSDBA do this:


tkyte@ORA8I> connect sys/oracle
Connected.
tkyte@ORA8I> @?/rdbms/admin/initplsj.sql




Once you install ftp.pks and ftp.pkb, you can:


tkyte@ORA8I> @z:/home/tkyte/Desktop/test
tkyte@ORA8I> DECLARE
2 l_conn UTL_TCP.connection;
3 l_blob blob;
4 BEGIN
5
6 select longname into l_blob from test_table where rownum = 1;
7
8 l_conn := ftp.login('172.16.95.2', '21', 'tkyte', 'password');
9 ftp.binary(p_conn => l_conn);
10 ftp.put_remote_binary_data
11 (p_conn => l_conn,
12 p_file => '/tmp/foo.dat',
13 p_data => l_blob );
14 ftp.logout(l_conn);
15 utl_tcp.close_all_connections;
16 END;
17 /

PL/SQL procedure successfully completed.


You can actually *ftp* the data to yourself ;)


Put that into a procedure with parameters for the BLOB and the file name and you should be good to go.
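A sketch of such a wrapper, built only from the ftp package calls shown above (the host, port, credentials and remote path are placeholders, as in the example):

create or replace procedure ftp_blob( p_blob        in blob,
                                      p_remote_file in varchar2 )
as
    l_conn utl_tcp.connection;
begin
    -- replace host, port, user and password with your own ftp server details
    l_conn := ftp.login('172.16.95.2', '21', 'tkyte', 'password');
    ftp.binary( p_conn => l_conn );
    ftp.put_remote_binary_data( p_conn => l_conn,
                                p_file => p_remote_file,
                                p_data => p_blob );
    ftp.logout( l_conn );
    utl_tcp.close_all_connections;
end;
/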

Retrieve BLOBS on UNIX using PL/SQL

Steve Cover, June 16, 2004 - 10:33 am UTC

I am now using a PROCEDURE to extract files from a 9i db. I have an expert SQL programmer and a DBA helping me with this. We have not been able to extract more than one file at a time. When using the cursor in the proc below, we get the following error:

Error in file: C10000007-A01~PCB AFE~CA1-C10000001---paneldwg---01.pdf
Error #: 100
Error Message: ORA-01403: no data found
End of BLOB
End Time: 16-JUN-04 10:13:25


Help!!
-----------------------------------------------------------

PROCEDURE RET_LOBS IS

    temp_blob     BLOB;
    data_buffer   RAW (1);
    temp_buffer   VARCHAR2(1);
    amount        BINARY_INTEGER := 1;
    position      INTEGER := 1;
    filehandle    utl_file.file_type;
    error_number  NUMBER;
    error_message VARCHAR2(500);
    length_count  INTEGER;
    obj_id        VARCHAR2(500);
    l_filename    VARCHAR2(500);

    cursor c_blob is
        select
            ltrim(tsa.obj_id,'001000')||'~'||
            tsa.COMP_NAME||'~'||
            ltrim(replace(trd.compdoc_id,':','---'),'11000')||'.'||
            tdc.FILETYPE blob_file,
            tdc.obj_id
        from
            tr_doc trd,
            te_sachnr tsa,
            tl_docs tdc
        where
            tdc.obj_id = trd.compdoc_id and
            trd.catena_doc_id = tsa.obj_id and
            tdc.filetype = 'pdf'
            and tdc.obj_id is not null;

BEGIN

    dbms_output.put_line('Start Time: ' || to_char(sysdate,'DD-MON-RR HH24:MI:SS'));

    FOR v_blob IN c_blob LOOP

        begin
            dbms_output.put_line('==================================================================');
            dbms_output.put_line('File : '||v_blob.blob_file);
            SELECT BLOB_OBJ INTO temp_blob FROM tl_docs WHERE obj_id = v_blob.obj_id and rownum = 1;

            length_count := dbms_lob.getlength(temp_blob);
            dbms_output.put_line('Internal LOB size is: ' || length_count);

            l_filename := v_blob.blob_file;

            filehandle := utl_file.fopen('/opensource/dms_user',l_filename,'W',32767);

            WHILE length_count <> 0 LOOP
                dbms_lob.read (temp_blob, amount, position, data_buffer);
                temp_buffer := utl_raw.cast_to_varchar2(data_buffer);
                utl_file.put(filehandle, temp_buffer);
                utl_file.fflush(filehandle);
                position := position + 1;
                length_count := length_count - 1;
                data_buffer := null;
            END LOOP;

            dbms_output.put_line('Close the file');
            utl_file.fclose(filehandle);

            --end if;

        exception
            when others then
                dbms_output.put_line('Error in file: '||l_filename);
                error_number := sqlcode;
                error_message := substr(sqlerrm ,1 ,100);
                dbms_output.put_line('Error #: ' || error_number);
                dbms_output.put_line('Error Message: ' || error_message);
                utl_file.fclose(filehandle);
        end;

    END LOOP; --c_blob

    dbms_output.put_line('End of BLOB');
    dbms_output.put_line('End Time: ' || to_char(sysdate,'DD-MON-RR HH24:MI:SS'));

EXCEPTION
    WHEN NO_DATA_FOUND then
        dbms_output.put_line('Error in file: '||l_filename);
    WHEN OTHERS THEN
        BEGIN
            error_number := sqlcode;
            error_message := substr(sqlerrm ,1 ,100);
            dbms_output.put_line('Error #: ' || error_number);
            dbms_output.put_line('Error Message: ' || error_message);
            utl_file.fclose_all;
        END;
END;

Thanks, Steve Cover
scover@ciena.com

Tom Kyte
June 16, 2004 - 1:07 pm UTC

that is not going to work -- trying to trick it out with binary data. utl_file is going to corrupt the data.

recommend you try the above approach using ftp.

what is wrong here?

A reader, June 27, 2004 - 2:15 pm UTC

Trying to retrieve a clob from Java for writing
(92 and 10g - same problems)

--schema
scott@ORA92I> drop table clob_table;
drop table clob_table
*
ERROR at line 1:
ORA-00942: table or view does not exist


scott@ORA92I> create table clob_table
2 (
3 clob_col clob
4 );

Table created.

scott@ORA92I>
scott@ORA92I> insert into clob_table (clob_col)
2 values ( 'A clob example' );

1 row created.

scott@ORA92I>
scott@ORA92I> commit;

Commit complete.

scott@ORA92I>
scott@ORA92I> select clob_col from clob_table;

A clob example
----- Java program

// importing standard JDBC classes under java.sql class hierarchy
import java.sql.*;
import java.io.*;

//import oracle specific classes
import oracle.jdbc.*;
import oracle.sql.*;

class Bug
{
public static void main(String args[])
{
Connection conn = null;
PreparedStatement pstmt = null;
ResultSet rset = null;
try
{
DriverManager.registerDriver( new OracleDriver());

String thinDriverURL = "jdbc:oracle:thin:@my_host:1521:ora92i";
conn = DriverManager.getConnection ( thinDriverURL, "scott",
"tiger" );
String stmtString =
"select clob_col from clob_table";
pstmt = conn.prepareStatement( stmtString );

pstmt.execute();
rset = pstmt.getResultSet();

while( rset.next() )
{
Clob clob = rset.getClob( 1 );
Writer writer= ((CLOB) clob).setCharacterStream(1L);
writer.close();
}
}
catch (Exception e)
{
// handle the exception properly - in this case, we just
// print a message and stack trace.
e.printStackTrace();
}
finally
{
// release resources associated with JDBC
// in the finally clause.
try
{
if( conn != null) { conn.close();}
}
catch( Exception e) {}
}
}
}
-------
two questions:
1- Why do I get Unsupported feature exception
when I run this program - the documentation
says that setCharacterStream() is supported?
2. if I change the select to have a "for update"
I get a "fetch out of sequence" in 9i..Why?



Tom Kyte
June 28, 2004 - 7:47 am UTC

1) the 9ir2 docs state this is an unsupported feature.

2) java intelligently "auto commits" (sarcasm dripping from my tongue on that one). How an "enterprise" application development language could have that as the default is way beyond me. The for update was committed on you, invalidating the cursor. set autocommit to false should be the line of code after connect in any real java program.

re: above question

A reader, June 27, 2004 - 2:29 pm UTC

well, for 10g - i can use getCharacterOutputStream()
and make it work (with a for update clause)..
...

// importing standard JDBC classes under java.sql class hierarchy
import java.sql.*;
import java.io.*;

//import oracle specific classes
import oracle.jdbc.*;
import oracle.sql.*;

class Bug
{
public static void main(String args[])
{
Connection conn = null;
PreparedStatement pstmt = null;
ResultSet rset = null;
try
{
DriverManager.registerDriver( new OracleDriver());

String thinDriverURL = "jdbc:oracle:thin:@my_host:1521:ora92i";
conn = DriverManager.getConnection ( thinDriverURL, "scott",
"tiger" );
String stmtString =
"select clob_col from clob_table";
pstmt = conn.prepareStatement( stmtString );

pstmt.execute();
rset = pstmt.getResultSet();

while( rset.next() )
{
Clob clob = rset.getClob( 1 );
Writer writer= ((CLOB) clob).setCharacterStream(1L);
writer.close();
}
}
catch (Exception e)
{
// handle the exception properly - in this case, we just
// print a message and stack trace.
e.printStackTrace();
}
finally
{
// release resources associated with JDBC
// in the finally clause.
try
{
if( conn != null) { conn.close();}
}
catch( Exception e) {}
}
}
}
...

But for 9i - this still gives fetch out of sequence
error - and if I don't have the "for update" it says
that the row containing the lob value is not locked!:)

thanx Tom!

A reader, June 28, 2004 - 10:47 am UTC

the autocommit must be the issue - I completely
forgot about it - yes it is a terrible default
to have - but I guess as with the case with
defaults - once you have it - it is not easy to change it:)


Tom Kyte
June 28, 2004 - 10:51 am UTC

i just wish the "inventor" of jdbc hadn't copied odbc (ms's native database api), wish they would have asked a database person for some advice...


agree!:)

A reader, June 28, 2004 - 11:39 am UTC

Yup they ended up following the odbc model which
does not make sense from a transaction perspective:)

btw, I got the above error from the 10g doc...
( 9i doc shows the correct usage of getCharacterOutputStream()...)

but 10g doc gives the following example...
// write the array of character data to a CLOB
writer = ((CLOB)my_clob).setCharacterStream();
...

This does not compile and I assumed that they
meant setCharacterStream( long pos) method which
gave the Unsupported feature error..

Thank you!


Different types of doc stored in BLOB

Kasturi, June 30, 2004 - 11:18 am UTC

Tom,

The BLOB column is holding different types of documents such as .xls, .doc, .jiff etc. I need to retrieve the contents of the blob in the appropriate format. For example, a Word document should be retrieved as .doc, an Excel sheet as .xls, etc. With your C program we could retrieve the first 40 bytes, which tells us what kind of document it is, but we are still not able to catch all the types. What is the best way to do it?

Thanks,

Tom Kyte
June 30, 2004 - 12:15 pm UTC

you should have a column then that is your mime type or at least "filename" -- your program would read this column and generate the appropriate filename to save to.

the first 40 bytes isn't going to tell you much
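
For illustration only (the table and column names here are made up), the idea is simply to carry the name and type next to the blob and drive the unload from them:

-- hypothetical table -- adjust names to your own schema
create table documents
( id         number primary key,
  filename   varchar2(255) not null,   -- e.g. 'spec.doc', 'budget.xls'
  mime_type  varchar2(100),            -- e.g. 'application/msword'
  content    blob
);

-- the unload program reads filename (and mime_type if it needs it)
-- and saves each blob under that name
select id, filename, mime_type, content from documents;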

reading bfile as plain ascii characters

A reader, July 05, 2004 - 2:12 pm UTC

Tom,
If the bfile points to a text file and I try
to use the bfile's asciiStreamValue() method to
get it as a stream of ascii characters, it still
seems to return a binary stream (as bytes.) The objective
is to print the characters as ascii text back from the
bfile using JDBC. I was able to do it using
dbms_lob - but if I try to use JDBC's stream functions
I am not able to get it done...Of course one way
is to use dbms_lob from JDBC..

Any ideas?
Thank you!


Tom Kyte
July 05, 2004 - 2:20 pm UTC

convert the byte array into a string. the b in bfile stands for "binary". you'll have to convert the binary bytes into ascii in java.

You could (perhaps 'should') use utl_file, designed to read text files from the database server.
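
A minimal sketch of that utl_file route (the directory object MY_DIR and the file name are placeholders here):

declare
    l_file  utl_file.file_type;
    l_line  varchar2(32767);
begin
    l_file := utl_file.fopen( 'MY_DIR', 'some_text_file.txt', 'r', 32767 );
    loop
        utl_file.get_line( l_file, l_line );   -- raises NO_DATA_FOUND at end of file
        dbms_output.put_line( l_line );
    end loop;
exception
    when no_data_found then
        utl_file.fclose( l_file );
end;
/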

re: query above - ok I got it

A reader, July 05, 2004 - 2:21 pm UTC

You use getBinaryStream() - then you cast each byte
into char to print it out...found an example in the
Oracle JDBC guide..
---
bfile.openFile();
// get the BFILE data as a binary stream
InputStream in = bfile.getBinaryStream();
int length ;
// read the bfile data in 6-byte chunks
byte[] buf = new byte[6];
while ((length = in.read(buf)) != -1)
{
// append and display the bfile data in 6-byte chunks
StringBuffer sb = new StringBuffer(length);
for (int i=0; i<length; i++)
sb.append( (char)buf[i] );
System.out.println(sb.toString());
}
---

Thanx!

thanx Tom!

A reader, July 05, 2004 - 2:30 pm UTC

You are too fast! - while I was typing above response
you were typing the same answer:)

Anyways one small addition - instead of using
the StringBuffer() in a loop, one can also use
the String overloaded constructor as follows..
(orset = OracleResultSet variable)
----
while( orset.next() )
{
BFILE bfile = orset.getBfile( 1 );
bfile.openFile();
InputStream in = bfile.getBinaryStream();

while (( length = in.read( buffer)) != -1 )
{
System.out.print( new String( buffer, 0,
length ) );
}
}


Tom Kyte
July 05, 2004 - 2:42 pm UTC

or, you could just create an external table and select * from it as well...

thanx!

A reader, July 05, 2004 - 2:55 pm UTC

yes - external tables is definitely another option...


although,

A reader, July 05, 2004 - 3:16 pm UTC

on second thoughts external tables are perhaps more
suited for "structured" data - for unstructured data
such as a free format text - perhaps it does not
offer any advantages. In fact LOBs would be a better
alternative since they can be read piece-wise/randomly
in a natural fashion...If your data is structured
and follows a "queriable" format - then external tables
might be a better option (again depends on your
requirements.)

Tom Kyte
July 05, 2004 - 4:32 pm UTC

if you are reading an ascii text file, it is a structured file -- it is just text terminated by a newline. a simple table with a varchar2(4000) probably handles most all needed cases and could be much easier than writing code -- you already have the directory object -- a simple create table and then you just "select * from t" -- no byte conversion, no streams, just a simple result set.
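
A minimal sketch of that (the directory object and file name are placeholders; a fuller example appears in the benchmark further down this page):

create table text_file_et
( line  varchar2(4000) )
organization external
( type oracle_loader
  default directory my_dir
  access parameters ( records delimited by newline )
  location ( 'some_text_file.txt' )
);

select * from text_file_et;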

yes but

A reader, July 05, 2004 - 9:05 pm UTC

you can not do stream operations such as seek/piecewise
fetch etc easily on the external tables/result set...plus if you declare a column of varchar2(4000), the client has to pre-allocate that much for each row in the prefetch
(for example, if you prefetch 15 rows, you have to
preallocate 15*4000 bytes (even if each row
consists of only 1 character say for an extreme
example)...So there may be performance issues...
Having said that I do see your point of 4000 being
ok for most cases...

Tom Kyte
July 05, 2004 - 10:16 pm UTC

or use a reasonable data processing language such as plsql :)



hmm...

A reader, July 05, 2004 - 10:31 pm UTC

I am a fan of PL/SQL but what if we need to select chunks of clob data from the database in the JDBC layer? :) (Come on Tom
you can have someone else have a last word once in a while,
eh?;))


ftp.pkb

A reader, July 07, 2004 - 3:32 pm UTC

where can i find this ftp.pkb?

Tom Kyte
July 07, 2004 - 4:17 pm UTC

from the link above??? I have the URL to the site that owns that package right there.

Extracting Files using UTL_FILE

Steve Cover, July 08, 2004 - 2:01 pm UTC

This code worked very well for me extracting BLOBS directly connected to a 9i database on a UNIX server. The UTL_FILE setting needs to be set up on the database you're extracting from. Thanks to all who helped me get to this...
Steve


PROCEDURE RET_CC0A_PDF IS

temp_blob BLOB;
data_buffer RAW (1);
temp_buffer VARCHAR2(1);
amount BINARY_INTEGER := 1;
position INTEGER := 1;
filehandle utl_file.file_type;
error_number NUMBER;
error_message VARCHAR2(500);
length_count INTEGER;
obj_id VARCHAR2(500);
l_filename VARCHAR2(500);

cursor c_blob is

SELECT
substr(a.obj_id, 7)||'--'|| --PN
substr(a.her, 7)||'--'|| --MFGR
a.katalog||'--'|| --MFG_PART_NUM
replace(substr(a.doc_id, 7),':',' ')||'.'|| --DOC_ID
b.filetype blob_file, -- FILE_EXT
b.obj_id
FROM tr_her_snr a,
tl_docs b
WHERE -- PDF FILE EXTRACTIONS
a.doc_id = b.obj_id
and a.doc_ID LIKE '%datasheet%'
and (a.obj_id like '%CC0%' ) -- 5733 files to extract
and b.filetype ='pdf'
and substr(a.her, 7) like 'A%';

BEGIN

dbms_output.put_line('Start Time: ' || to_char(sysdate,'DD-MON-RR HH24:MI:SS'));

FOR v_blob IN c_blob LOOP

begin

dbms_output.put_line('==================================================================');
dbms_output.put_line('File : '||v_blob.blob_file);
SELECT BLOB_OBJ INTO temp_blob FROM tl_docs WHERE obj_id = v_blob.obj_id and rownum = 1;

length_count := dbms_lob.getlength(temp_blob);

dbms_output.put_line('Internal LOB size is: ' || length_count);

l_filename := v_blob.blob_file;


filehandle := utl_file.fopen('/opensource/dms_user/pdfs/cc0/a',l_filename,'W',32767);
data_buffer := null;
temp_buffer := null;
amount := 1;
position := 1;
WHILE length_count <> 0 LOOP

dbms_lob.read (temp_blob, amount, position, data_buffer);

temp_buffer := utl_raw.cast_to_varchar2(data_buffer);

utl_file.put(filehandle, temp_buffer);

utl_file.fflush(filehandle);

position := position + 1;

length_count := length_count - 1;

data_buffer := null;


END LOOP;

dbms_output.put_line('Close the file');
utl_file.fclose(filehandle);

--end if;

exception
when others then
dbms_output.put_line('Error in file: '||l_filename);
error_number := sqlcode;
error_message := substr(sqlerrm ,1 ,100);
dbms_output.put_line('Error #: ' || error_number);
dbms_output.put_line('Error Message: ' || error_message);
utl_file.fclose(filehandle);

end;

END LOOP; --c_blob

dbms_output.put_line('End of BLOB');

dbms_output.put_line('End Time: ' || to_char(sysdate,'DD-MON-RR HH24:MI:SS'));

EXCEPTION
WHEN NO_DATA_FOUND then
dbms_output.put_line('Error in file: '||l_filename);
WHEN OTHERS THEN
BEGIN
error_number := sqlcode;
error_message := substr(sqlerrm ,1 ,100);
dbms_output.put_line('Error #: ' || error_number);
dbms_output.put_line('Error Message: ' || error_message);
utl_file.fclose_all;
END;
END;


Tom Kyte
July 08, 2004 - 3:38 pm UTC

as pink floyd likes to sing "careful with that axe eugene"


o data streams that continue for more than 32k characters without a newline will blow this up

o data streams including nastiness like binary zeroes and who knows what else -- bamm.

o newline added to end of file...


that is not a reliable routine for binary data at all.

consider these two binary objects:

ops$tkyte@ORA9IR2> declare
  2          l_raw  raw(256);
  3          l_blob blob;
  4  begin
  5          for i in 0 .. 255
  6          loop
  7                  l_raw := l_raw || hextoraw( to_char(i,'fm0X') );
  8          end loop;
  9          insert into t values ( l_raw );
 10          insert into t values ( l_raw ) returning x into l_blob;
 11
 12          l_raw := null;
 13          for i in 1 .. 255
 14          loop
 15                  l_raw := l_raw || hextoraw( 'fe' );
 16          end loop;
 17
 18          for i in 1 .. 200
 19          loop
 20                  dbms_lob.writeappend( l_blob, 255, l_raw );
 21          end loop;
 22          commit;
 23  end;
 24  /
 
PL/SQL procedure successfully completed.
 


Now, using your procedure to unload those two we find:

ops$tkyte@ORA9IR2> exec RET_CC0A_PDF
Internal LOB size is:  256
Close the file
Internal LOB size is:  51256
BEGIN RET_CC0A_PDF; END;
 
*
ERROR at line 1:
ORA-29285: file write error
ORA-06512: at "SYS.UTL_FILE", line 18
ORA-06512: at "SYS.UTL_FILE", line 993
ORA-06512: at "OPS$TKYTE.RET_CC0A_PDF", line 30
ORA-06512: at line 1


The first extracted 256 bytes -- but:

 
ops$tkyte@ORA9IR2> !ls -l /tmp/foo.dat.1
-rw-rw-r--    1 ora9ir2  ora9ir2       257 Jul  8 15:20 /tmp/foo.dat.1
 

it is 257 bytes in the file system and od shows:


ops$tkyte@ORA9IR2> !od -c /tmp/foo.dat.1
0000000  \0 001 002 003 004 005 006  \a  \b  \t  \n  \v  \f  \r 016 017
0000020 020 021 022 023 024 025 026 027 030 031 032 033 034 035 036 037
0000040       !   "   #   $   %   &   '   (   )   *   +   ,   -   .   /
0000060   0   1   2   3   4   5   6   7   8   9   :   ;   <   =   >   ?
0000100   @   A   B   C   D   E   F   G   H   I   J   K   L   M   N   O
0000120   P   Q   R   S   T   U   V   W   X   Y   Z   [   \   ]   ^   _
0000140   `   a   b   c   d   e   f   g   h   i   j   k   l   m   n   o
0000160   p   q   r   s   t   u   v   w   x   y   z   {   |   }   ~ 177
0000200 200 201 202 203 204 205 206 207 210 211 212 213 214 215 216 217
0000220 220 221 222 223 224 225 226 227 230 231 232 233 234 235 236 237
0000240 240 241 242 243 244 245 246 247 250 251 252 253 254 255 256 257
0000260 260 261 262 263 264 265 266 267 270 271 272 273 274 275 276 277
0000300 300 301 302 303 304 305 306 307 310 311 312 313 314 315 316 317
0000320 320 321 322 323 324 325 326 327 330 331 332 333 334 335 336 337
0000340 340 341 342 343 344 345 346 347 350 351 352 353 354 355 356 357
0000360 360 361 362 363 364 365 366 367 370 371 372 373 374 375 376 377
0000400  \n
0000401

newline added to the end.  

The second issue is a problem -- no newline = no output.....


I would not use this routine for binary data.

Also, the exception blocks without a RAISE!!! that's a bug.  error messages are way too easy to just read over -- you want a real error raised (i chopped them all out for this example)
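
In other words, if you keep a per-file handler at all, end it by re-raising -- a minimal sketch of the tail end of the poster's handler, reworked (same variables as in the procedure above):

    exception
        when others then
            dbms_output.put_line('Error in file: '||l_filename);
            if utl_file.is_open( filehandle ) then
                utl_file.fclose( filehandle );
            end if;
            raise;   -- re-raise so the failure cannot be silently scrolled past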
 

utl_raw.cast_to_varchar2

A reader, July 10, 2004 - 3:52 pm UTC

works fine in pl/sql as shown above..Any idea why
it does not print characters when executed from
sql in a select statement?

Thanx!

re: above query

A reader, July 10, 2004 - 4:16 pm UTC

sorry it does work!:)

can we use external tables to

A reader, July 11, 2004 - 2:26 pm UTC

read binary files? I know we have other ways of
doing this - but I am just curious...

Tom Kyte
July 12, 2004 - 11:01 am UTC

bfile's are what I would use -- then you have the power of the DBMS_LOB package on them
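
A minimal sketch of reading a binary OS file through a bfile (the directory object MY_DIR and the file name are placeholders):

declare
    l_bfile   bfile := bfilename( 'MY_DIR', 'some_binary_file.dat' );
    l_raw     raw(32767);
    l_amount  integer;
    l_pos     integer := 1;
    l_len     integer;
begin
    dbms_lob.fileopen( l_bfile, dbms_lob.file_readonly );
    l_len := dbms_lob.getlength( l_bfile );
    while l_pos <= l_len
    loop
        l_amount := 32767;
        dbms_lob.read( l_bfile, l_amount, l_pos, l_raw );   -- l_amount comes back as bytes actually read
        -- process l_raw here (write it somewhere, checksum it, ...)
        l_pos := l_pos + l_amount;
    end loop;
    dbms_lob.fileclose( l_bfile );
end;
/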

benchmarking BFILE vs external table (in 10g r1)

Menon, July 11, 2004 - 6:05 pm UTC

Tom
Interesting discussion on bfile versus
external tables..

I did a small benchmark to compare reading
from bfile to that with external table.
file had 10K lines - total 198894 characters -
an average of around 20 characters per line.

Took care to open and prepare statements
only once in both cases..

In the first run:
For external table - I used normal result set based
select (default prefetch of 10 records)

For bfile - I used standard functionality with a
buffer of (prefetch * average row size) = 10*20
to hopefully make the comparison fair. Bfile was
first inserted in a simple table and then there was
a select of bfile column in the JDBC becnhmark.

Following were the results:
Bfile: took an average of 54 millisecs to read 10K records
External table: took an average of 958 ms to read 10K
records.

After this I changed the setting of external table
select to prefetch 500 records at a time (instead
of the default of 10). Also set the Bfile buffer
size to 500*20

The results were:

Bfile: took an average of 53 millisecs to read 10K records
External table: took an average of 321 ms to read 10K
records.

From the benchmark results, it seems bfiles
are faster by a huge margin (at least in above cases:))
- and it is at least worth benchmarking in your particular application before making a choice, taking into consideration ease of use, performance, etc. in your particular case.


Tom Kyte
July 12, 2004 - 11:07 am UTC

what do you mean by reading "10k records from a bfile"? Not really sure what you were comparing here.

Re: benchmarking BFILE vs external table (in 10g r1)

Menon, July 12, 2004 - 11:19 am UTC

"what do you mean by reading "10k records from a bfile"? Not really sure what you were comparing here.
"
Sorry I was not clear.
I created a file containing 10K *lines* that looked like:
my line number 1
my line number 2
..
my line number 10000

-----
I then inserted a bfile column in a table pointing
to this file. I also created an external table
(with "records delimited by newline" option.

Then from JDBC I tried to select all lines
as mentioned above using bfile and
external table..
Hope I am clear now..

Thanx!






Tom Kyte
July 12, 2004 - 11:50 am UTC

and how did you "select lines from a bfile"

I'm suspecting you just benchmarked:

a) retrieving 10k lines of data using an external table

versus

b) retrieving a bfile lob locator of a couple of bytes in size.

OK...

Menon, July 12, 2004 - 12:11 pm UTC

"a) retrieving 10k lines of data using an external table"

Correct

"b) retrieving a bfile lob locator of a couple of bytes in size. "

Nopes -> I retrieved the locator and the data in
a loop (verified that the data is actually
being retrieved...
--something like:
while( orset.next() )
{
bfile = orset.getBfile( 1 );
bfile.openFile();
in = bfile.getBinaryStream();
numOfCharacersRead = 0;

while (( length = in.read( buffer, 0, buffer.length )) != -1 )
{
...
}
...
I actually mentioned above the logic of buffer
size as well (prefetch size for external table
case * number of characters per row)

let me know if i missed anything...

Thanx!

Tom Kyte
July 12, 2004 - 8:26 pm UTC

lets see the entire benchmark program. I'd be interested.

OK here you go...

Menon, July 12, 2004 - 10:26 pm UTC

I had to edit it to make it into one program...I have
some suspicions on the results also... that I
detail at the end of this post...

First the schema....
-------
benchmark@ORA10G> -- NOTE: You need to replace the directory and file with
benchmark@ORA10G> -- a real directory in your machine for all of
benchmark@ORA10G> -- these examples to work.
benchmark@ORA10G>
benchmark@ORA10G> create or replace directory my_dir as '/private/ora92/book/ch09';

Directory created.

benchmark@ORA10G>
benchmark@ORA10G>
benchmark@ORA10G> -- the following writes to a file creating the
benchmark@ORA10G> -- files that will be used in benchmarking reads
benchmark@ORA10G>
benchmark@ORA10G> declare
2 l_buffer varchar2(32767);
3 l_file_handle utl_file.file_type;
4 begin
5 -- open the file in write mode -- create one if
6 -- file does not exist.
7
8 l_file_handle := utl_file.fopen( 'MY_DIR', 'benchmark_input.txt', 'W',
9 256 );
10 for i in 1 .. 10000
11 loop
12 utl_file.put_line( l_file_handle, 'my line number ' || i );
13 end loop;
14 utl_file.fclose( l_file_handle );
15 exception
16 when others then
17 raise;
18 end;
19 /

PL/SQL procedure successfully completed.

benchmark@ORA10G> drop table bfile_table;

Table dropped.

benchmark@ORA10G> create table bfile_table
2 (
3 x varchar2(30),
4 id number,
5 bfile_col bfile
6 );

Table created.

benchmark@ORA10G> insert into bfile_table( x, id, bfile_col )
2 values ( 'benchmark text file', 1,
3 bfilename( 'MY_DIR', 'benchmark_input.txt' ));

1 row created.

benchmark@ORA10G> commit;

Commit complete.

benchmark@ORA10G>
benchmark@ORA10G> -- create an external table that points to the
benchmark@ORA10G> -- benchmark_input.txt file - note that
benchmark@ORA10G> -- we specify that records are delimited
benchmark@ORA10G> -- by newlines to indicate a "free format
benchmark@ORA10G> -- text file
benchmark@ORA10G>
benchmark@ORA10G> drop table et_table;

Table dropped.

benchmark@ORA10G> create table et_table
2 (
3 data varchar2(4000)
4 )
5 organization external
6 (
7 type oracle_loader
8 default directory my_dir
9 access parameters
10 (
11 records delimited by newline
12 )
13 location( 'benchmark_input.txt' )
14 );

Table created.

benchmark@ORA10G>
benchmark@ORA10G> -- verify the count of records in external table
benchmark@ORA10G>
benchmark@ORA10G> select count(*) from et_table;

10000

benchmark@ORA10G> spool off


Then the JDBC program...
-------------------

/*
* IMPORTANT: You need to modify the connection information
* to point to your own database for this example to work.
*
*/


// importing standard JDBC classes under java.sql class hierarchy
import java.sql.*;
import java.io.*;

//import oracle specific classes
import oracle.jdbc.*;
import oracle.sql.*;

class CompareReadUsingBfileAndExternalTable
{
public static void main(String args[])
{
_checkProgramUsage( args );
Connection conn = null;
try
{
conn = _getConnection("benchmark", "benchmark",
args[0]);

_prepareBenchmarkStatements( conn );
_runBenchmark( BFILE_DESC , conn );
_runBenchmark( EXTERNAL_TABLE_DESC , conn );
}
catch (Exception e)
{
// handle the exception properly - in this case, we just
// print a message and stack trace.
e.printStackTrace();
}
finally
{
// release resources associated with JDBC
// in the finally clause.
try
{
_closeBenchmarkStatements( conn );
conn.close();
}
catch (Exception e) { }
}
}

private static void _runBenchmark( String benchmarkDesc,
Connection conn )
throws SQLException, IOException
{
System.out.println("In _runBenchmark for " + benchmarkDesc);

// find out how much time a single run takes
long startTime = System.currentTimeMillis();
long numOfCharactersRead = 0;
if( BFILE_DESC.equals( benchmarkDesc ) )
numOfCharactersRead = _readUsingBfile( conn );
else
numOfCharactersRead = _readUsingExternalTable( conn );

long endTime = System.currentTimeMillis();

// find out the number of runs that would consume 5 minutes

int numOfRuns = (int)((5 * 60 * 1000)/( endTime-startTime ));
System.out.println( benchmarkDesc + ": time for first run = " +
(endTime-startTime) );
System.out.println( benchmarkDesc + ": num of runs = " +
numOfRuns );

// if num of runs = 0 - print a message and reset it to 1
if( numOfRuns == 0 )
{
System.out.println( "One run took more than 5 minutes." );
numOfRuns = 1;
}

startTime = System.currentTimeMillis();

for(int i=0; i < numOfRuns; i++ )
{
if( BFILE_DESC.equals( benchmarkDesc ) )
numOfCharactersRead = _readUsingBfile( conn );
else
numOfCharactersRead = _readUsingExternalTable( conn );
}

endTime = System.currentTimeMillis();

long averageRunTime = (endTime-startTime)/numOfRuns;
System.out.println( benchmarkDesc + " took " +
averageRunTime + " ms for reading " +
numOfCharactersRead + " characters\n\n");
}

/*
demos how to read from a text file from the database as an ascii
file using bfile. returns number of characters read
*/

private static long _readUsingBfile( Connection conn )
throws SQLException, IOException
{
OracleResultSet orset = null;
BFILE bfile = null;
InputStream in = null;
long numOfCharacersRead = 0;
try
{
_bfilePstmt.setInt( 1, 1 );
orset = (OracleResultSet) _bfilePstmt.executeQuery();
byte[] buffer = new byte[ _bfileBufferSize];
int length = -1;
while( orset.next() )
{
bfile = orset.getBfile( 1 );
bfile.openFile();
in = bfile.getBinaryStream();
numOfCharacersRead = 0;

while (( length = in.read( buffer, 0, buffer.length )) != -1 )
{
//System.out.print( new String( buffer, 0, length ) );
numOfCharacersRead += length;
}
//System.out.println("Bfile: num of chars read: " + numOfCharacersRead);
}
}
finally
{
if( in != null )
in.close();
if( bfile != null )
bfile.closeFile();
try
{
orset.close();
}
catch (Exception e) { }
}

return numOfCharacersRead;
}

/*
demos how to read from a text file from the database as an ascii
file.
*/

private static long _readUsingExternalTable( Connection conn )
throws SQLException, IOException
{
ResultSet rset = null;
long numOfCharacersRead = 0;
try
{
rset = _externalTablePstmt.executeQuery();
numOfCharacersRead = 0;
while( rset.next() )
{
String line1 = rset.getString(1);
numOfCharacersRead += line1.length();
}
//System.out.println("External Table: num of chars read: " + numOfCharacersRead);
}
finally
{
try
{
rset.close();
}
catch (Exception e) { }
}
return numOfCharacersRead;
}

private static void _prepareBenchmarkStatements( Connection conn )
throws SQLException
{
String stmtString = "select data from et_table";
_externalTablePstmt = conn.prepareStatement( stmtString );
((OraclePreparedStatement)_externalTablePstmt).setRowPrefetch(
_prefetchSize );

stmtString = "select bfile_col from bfile_table "+
" where id = ?";
_bfilePstmt = conn.prepareStatement( stmtString );
}

private static void _closeBenchmarkStatements( Connection conn )
throws SQLException
{

try
{
_bfilePstmt.close();
_externalTablePstmt.close();
}
catch (Exception e) { }
}

private static Connection _getConnection( String username,
String password, String dbName)
throws SQLException
{
DriverManager.registerDriver( new OracleDriver());

String thinDriverURL =
"jdbc:oracle:thin:@usunrat24:1521:"+ dbName;
Connection connection =
DriverManager.getConnection ( thinDriverURL, username, password);
connection.setAutoCommit( false );

return connection;
}

private static void _checkProgramUsage( String[] args )
{
if( args.length != 1 )
{
System.out.println(" Usage: java <program_name> <database_name>");
System.exit(1);
}
}

private static final String BFILE_DESC = "Using Bfile";
private static final String EXTERNAL_TABLE_DESC =
"Using external table";

private static PreparedStatement _bfilePstmt;
private static PreparedStatement _externalTablePstmt;

private static int _prefetchSize = 20;
private static int _bfileBufferSize = _prefetchSize * 20;
}
-----
then the somewhat suspicious results
------

usunrat24 ora10g 873> java CompareReadUsingBfileAndExternalTable ora10g
In _runBenchmark for Using Bfile
Using Bfile: time for first run = 250
Using Bfile: num of runs = 1200
Using Bfile took 56 ms for reading 198894 characters


In _runBenchmark for Using external table
Using external table: time for first run = 1167
Using external table: num of runs = 257
Using external table took 937 ms for reading 188894 characters
.......

And finally some comments...
1. Notice that the character count is off
by the number of lines in the file - most
likely a newline is not getting counted...

This is most likely because with the bfile you actually
read the whole file, whereas in the select from the external
table the newline is the record separator, so it is
not returned as data by the select...

2. The first run of bfile takes around 250 ms..
but subsequent runs are suspiciously fast (this is not due
to parsing) - maybe some kind of optimization,
or maybe there is a bug in the program. Perhaps it is
related to the way the file is read (if there is no bug).

So hopefully all mysteries will be solved
once you have gone through the program :-)

btw, the reason I take an average is to
get the JVM to a steady state - this is a
better way of measuring performance when
you use only elapsed time...

PS: Thanx for this wonderful site - just finished
explaining to a disbelieving colleague about
your site and the fact there is only one
tom that handles almost all these questions
on this site (and that it is not your full
time job!)!:)


LOB SIZE INCREASES

abc, July 15, 2004 - 4:46 am UTC

HI TOM

I am facing a problem:
A table has a CLOB column and its size increased by 46 GB in one hour.
It's really bothering me.
The LOB tablespace initial and next extents are 32 MB;
the data tablespace (table segment) initial and next extents are 4 MB.

Please suggest.


I have a doubt: the row count of the table is 52 million - is that the reason, or is something wrong with the LOB?



Tom Kyte
July 15, 2004 - 12:22 pm UTC

there is insufficient detail here to say anything.

did you get a chance to look at the benchmark?

Menon, July 15, 2004 - 11:43 am UTC

that I posted above - I know you are on vacation - but
I am curious on your thoughts...(Don't mean to hurry you
of course..)

Thanx!

Tom Kyte
July 15, 2004 - 1:28 pm UTC

i've bookmarked it, will look at it later -- just doing quick answers this week :)

No problems - thanx! :)

Menon, July 15, 2004 - 2:56 pm UTC


Bad Blobs using UTL_FILE

Steve Cover, September 01, 2004 - 2:17 pm UTC

Using the code below I have successfully extracted 6500 BLOBS from a 9i database.
The BLOBs for the most part are all good, valid, complete files. However, I have a small number (250) that have not extracted properly (the files are incomplete). Only part of the file was pulled from the database. This is repeatable: re-testing a bad file always stops at the same byte count.
There has been some discussion as to whether the "UTL_FILE method" is the right way to do these types of binary file extracts. If not, what is the best way? The code I am using originally came off of METALINK, so it should be OK???
Thanks,
Steve

CODE:
 dbms_output.put_line('File : '||v_blob.blob_file);
    SELECT BLOB_OBJ INTO temp_blob FROM tl_docs WHERE obj_id = v_blob.obj_id and 
rownum = 1;

    length_count := dbms_lob.getlength(temp_blob);

    dbms_output.put_line('Internal LOB size is:  ' || length_count);

    l_filename := v_blob.blob_file;


    filehandle := utl_file.fopen('/opensource/dms_user',l_filename,'W',32767);
    WHILE length_count <> 0 LOOP

       dbms_lob.read (temp_blob, amount, position, data_buffer);

       temp_buffer := utl_raw.cast_to_varchar2(data_buffer);

       utl_file.put(filehandle, temp_buffer);

       utl_file.fflush(filehandle);

       position := position + 1;

       length_count := length_count - 1;

       data_buffer := null;

    END LOOP;

    dbms_output.put_line('Close the file');
    utl_file.fclose(filehandle);

ERROR:
SQL> execute ret_tst
Start Time: 01-SEP-04 11:22:02
==================================================================
File : CC0000008~AMD~1237661~AM29LV640DU90WHI~StatA~Doc1~AM29LV640DU90 datasheet
01.pdf
Internal LOB size is:  1237661
Error in file: CC0000008~AMD~1237661~AM29LV640DU90WHI~StatA~Doc1~AM29LV640DU90
datasheet 01.pdf
Error #: -29285
Error Message: ORA-29285: file write error
==================================================================
File : CC0000008~AMD~1237661~AM29LV640DU90RWHI~StatA~Doc1~AM29LV640DU90
datasheet 01.pdf
Internal LOB size is:  1237661
Error in file: CC0000008~AMD~1237661~AM29LV640DU90RWHI~StatA~Doc1~AM29LV640DU90
datasheet 01.pdf
Error #: -29285
Error Message: ORA-29285: file write error
End of BLOB
End Time:  01-SEP-04 11:51:47

PL/SQL procedure successfully completed. 

Tom Kyte
September 01, 2004 - 2:47 pm UTC

you definitely cannot use utl_file for this in the fashion you have. you would have to use the RAW interface if any at all. You can take a blob and say "plsql please consider this a varchar2" but utl_file is going to treat it as a string and things like binary zeros, lines without a newline -- they are all going to give it issues (that and you'll find spurious newlines at the end)



I personally have found the ftp trick above to work flawlessly, quickly, with the least amount of coding.

DBA_JOBS

Nishant Choudhary, September 02, 2004 - 10:48 am UTC

I have modified the timing of a job (Materialized view) through this command.

SQL>exec dbms_job.next_date(173,sysdate+12/24);

It completed successfully.
But when I reviewed it through dba_jobs:

       JOB LOG_USER                       PRIV_USER
---------- ------------------------------ ------------------------------
SCHEMA_USER                    LAST_DATE LAST_SEC THIS_DATE THIS_SEC NEXT_DATE
------------------------------ --------- -------- --------- -------- ---------
NEXT_SEC TOTAL_TIME B
-------- ---------- -
INTERVAL
--------------------------------------------------------------------------------
  FAILURES
----------
WHAT
--------------------------------------------------------------------------------
NLS_ENV
--------------------------------------------------------------------------------
MISC_ENV                                                           INSTANCE
---------------------------------------------------------------- ----------
       173 CALLCENTER                     CALLCENTER
CALLCENTER                     02-SEP-04 14:33:38                    03-SEP-04
05:30:45       2831 N
sysdate + 24/24
         0
dbms_refresh.refresh('"CALLCENTER"."CMSS_BILL_INFO"');
NLS_LANGUAGE='AMERICAN' NLS_TERRITORY='AMERICA' NLS_CURRENCY='$' NLS_ISO_CURRENC
Y='AMERICA' NLS_NUMERIC_CHARACTERS='.,' NLS_DATE_FORMAT='DD-MON-RR' NLS_DATE_LAN
GUAGE='AMERICAN' NLS_SORT='BINARY'
0102000200000000                                                          0

and when I checked it through some other machine it showed no changes in the schedule...
I am really confused - why is it so?

Why is dba_jobs showing different output when queried from a different terminal, even though I am connecting as the same user in the same database?

Looking for your reply... sorry I am putting my query in the review part as I did not find any other way to put it... please let me know if there is another way.

 

Tom Kyte
September 02, 2004 - 10:55 am UTC

rather than just deleting this -- again (keeps popping up in threads that have *nothing whatsoever* to do with dbms_job).... I'll say one word:


commit;




A reader, September 02, 2004 - 10:57 am UTC


dba_jobs

Nishant Choudhary, September 03, 2004 - 4:17 am UTC

Sir,
I am unable to understand... can you describe it please?
(I am not deleting anything.)



Tom Kyte
September 03, 2004 - 11:39 am UTC

I deleted -- that is what I said "instead of me just deleting this again".  You posted this (much like here) in a thread that has nothing whatsoever to do with dbms_job.


the one word is "commit"

in session one you did an "update" in effect.  This update is visible to session 1.


in session two, you did a query. This query cannot see uncommitted work in progress by other sessions.


So, the one word is "commit", commit session 1.  Then session 2 will be able to "see" it.


consider the following, the autonomous transactions represent exactly "your other session"


ops$tkyte@ORA9IR2> variable n number
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_job.submit( :n, 'null;', sysdate+1 );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> select job, what, next_date from user_jobs where job = :n;
 
       JOB WHAT                           NEXT_DATE
---------- ------------------------------ ---------
         2 null;                          04-SEP-04
 
<b>Now you see it...</b>

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> declare
  2          pragma autonomous_transaction;
  3  begin
  4          dbms_output.put_line( 'In another transaction....' );
  5          for x in ( select * from user_jobs where job = :n )
  6          loop
  7                  dbms_output.put_line( x.job || ', ' || x.what || x.next_date );
  8          end loop;
  9          dbms_output.put_line( 'Done with another transaction....' );
 10          commit;
 11  end;
 12  /
In another transaction....
Done with another transaction....
 
PL/SQL procedure successfully completed.
 
<b>Now you don't, so COMMIT:</b>

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> commit;
 
Commit complete.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> declare
  2          pragma autonomous_transaction;
  3  begin
  4          dbms_output.put_line( 'In another transaction....' );
  5          for x in ( select * from user_jobs where job = :n )
  6          loop
  7                  dbms_output.put_line( x.job || ', ' || x.what || x.next_date );
  8          end loop;
  9          dbms_output.put_line( 'Done with another transaction....' );
 10          commit;
 11  end;
 12  /
In another transaction....
2, null;04-SEP-04
Done with another transaction....
 
PL/SQL procedure successfully completed.

<b>and now you see it..  Use next date:</b>

 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_job.next_date( :n, sysdate+5 );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> select job, what, next_date from user_jobs where job = :n;
 
       JOB WHAT                           NEXT_DATE
---------- ------------------------------ ---------
         2 null;                          08-SEP-04

<b>and it is changed, but:</b>
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> declare
  2          pragma autonomous_transaction;
  3  begin
  4          dbms_output.put_line( 'In another transaction....' );
  5          for x in ( select * from user_jobs where job = :n )
  6          loop
  7                  dbms_output.put_line( x.job || ', ' || x.what || x.next_date );
  8          end loop;
  9          dbms_output.put_line( 'Done with another transaction....' );
 10          commit;
 11  end;
 12  /
In another transaction....
2, null;04-SEP-04
Done with another transaction....
 
PL/SQL procedure successfully completed.

<b>No one else can SEE THAT CHANGE, until you use that one word:</b>
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> commit;
 
Commit complete.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> declare
  2          pragma autonomous_transaction;
  3  begin
  4          dbms_output.put_line( 'In another transaction....' );
  5          for x in ( select * from user_jobs where job = :n )
  6          loop
  7                  dbms_output.put_line( x.job || ', ' || x.what || x.next_date );
  8          end loop;
  9          dbms_output.put_line( 'Done with another transaction....' );
 10          commit;
 11  end;
 12  /
In another transaction....
2, null;08-SEP-04
Done with another transaction....
 
PL/SQL procedure successfully completed.


<b>and now we can all see it...</b>
 

Same problem ORA-29285: file write error

Eashwer Iyer, October 12, 2004 - 11:58 am UTC

-- this assumes the passing of the plsql table assembled within UpdateClientSubmitDate function ...
-- Merge the 2 processes and NOCOPY pass ....
DECLARE

CURSOR C_ADT IS
SELECT
adt.rowid adt_rid
FROM
account_detail_transactions adt;


i pls_integer;

type tRid is
table of rowid
index by pls_integer;


vRid tRid; -- this has to be in a persistent memory structure
vRows NUMBER := 1000000;
v_clientsubmit_date DATE := sysdate;
v_rec VARCHAR2(4000);
f1 utl_file.file_type;
k_max_line_size BINARY_INTEGER := 32767;

BEGIN
f1 := utl_file.fopen('/util/lola/output','tmp_fivepercent.dat','w',k_max_line_size);


IF C_ADT%ISOPEN
THEN
CLOSE C_ADT;
END IF;

OPEN C_ADT;
FETCH C_ADT BULK COLLECT INTO vRid LIMIT vRows;

FOR i in 1..vRid.count LOOP
SELECT
adt.transaction_date ||
adt.transaction_amount ||
adt.currency_code ||
adt.external_transaction_id ||
nvl(adt.start_date,adt.TRANSACTION_DATE) ||
nvl(adt.end_date,adt.TRANSACTION_DATE) ||
adt.rate_code ||
pcd.code ||
adt.quantity ||
adt.transaction_amount_local_total ||
pc.code INTO v_rec
FROM
program_codes pc,
portfolio_codes pcd,
account_detail_transactions adt
WHERE
adt.transaction_orig_pc4 = pc.ID (+)
AND adt.ineligible_reason = pcd.ID (+)
AND adt.rowid =vRid(i);


utl_file.put_line(f1,v_rec);
utl_file.fflush(f1);
IF (mod(i,5000) = 0) THEN -- known oracle buffering issue
utl_file.fclose( f1 ); -- close file and reopen in Append mode
f1 := utl_file.fopen('/util/lola/output','tmp_fivepercent.dat','a',k_max_line_size);
END IF;
ENd LOOP;

EXCEPTION
WHEN OTHERS THEN
raise;
IF C_ADT%ISOPEN
THEN
CLOSE C_ADT;
END IF;
utl_file.fclose( f1 );

END;


I tried all the suggestions as you can see (opening and closing every 5000 rows written to file). But it always seems to die at about 889,000 rows.

Tom Kyte
October 12, 2004 - 1:29 pm UTC

  1* select 2*1024*1024*1024/889000 from dual
ops$tkyte@ORA9IR2> /
 
2*1024*1024*1024/889000
-----------------------
             2415.61715



how big are your average rows there?  perhaps you are exceeding an OS limit at this point?

(also, why the "get rowids", run another query???? this should be a single query, period) 
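
A minimal sketch of the single-query shape meant here, reusing the poster's tables and columns (the every-5000-rows close/reopen and error handling trimmed for brevity):

declare
    f1  utl_file.file_type;
begin
    f1 := utl_file.fopen( '/util/lola/output', 'tmp_fivepercent.dat', 'w', 32767 );
    -- one pass: join directly instead of collecting rowids and re-querying per row
    for r in ( select adt.transaction_date || adt.transaction_amount ||
                      adt.currency_code    || adt.external_transaction_id ||
                      nvl(adt.start_date, adt.transaction_date) ||
                      nvl(adt.end_date,   adt.transaction_date) ||
                      adt.rate_code || pcd.code || adt.quantity ||
                      adt.transaction_amount_local_total || pc.code  rec
                 from account_detail_transactions adt,
                      program_codes pc,
                      portfolio_codes pcd
                where adt.transaction_orig_pc4 = pc.id (+)
                  and adt.ineligible_reason    = pcd.id (+) )
    loop
        utl_file.put_line( f1, r.rec );
    end loop;
    utl_file.fclose( f1 );
end;
/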

This Helped out quite a bit

Richard Murnane, October 20, 2004 - 5:00 pm UTC

However, I am hitting BUG 2546782 (9iv2). Basically I am trying to write out a binary file that I have uploaded and I can't because UTL_FILE.PUT_RAW writes wrong number of bytes. According to metalink, ~It has been fixed in 10i where the new file open modes like "RB", "WB", "AB" are introduced. If the binary file is opened in these modes then the <CR><LF> characters will not be appended to the file.~
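
For reference, a minimal sketch of what the 10g-style unload looks like once those binary open modes exist ('wb' below); the directory object, table and column names are placeholders to be replaced with your own:

declare
    l_blob    blob;
    l_file    utl_file.file_type;
    l_raw     raw(32767);
    l_amount  integer;
    l_pos     integer := 1;
    l_len     integer;
begin
    select blob_col into l_blob from blob_table where id = 1;             -- placeholder query

    l_len  := dbms_lob.getlength( l_blob );
    l_file := utl_file.fopen( 'MY_DIR', 'extract.bin', 'wb', 32767 );     -- 'wb' = binary write mode (10g+)
    while l_pos <= l_len
    loop
        l_amount := 32767;
        dbms_lob.read( l_blob, l_amount, l_pos, l_raw );
        utl_file.put_raw( l_file, l_raw, true );                          -- true = flush after each chunk
        l_pos := l_pos + l_amount;
    end loop;
    utl_file.fclose( l_file );
end;
/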

Getting Mio's code to work

Natasha, October 21, 2004 - 3:38 pm UTC

Hi, I have got several thousand attachments stored as blobs in an Oracle 8.1.7 instance. They need to be written to files. Some of the attachments span multiple rows (ie, first chunk of data is a blob in one rowid, second chunk is in a separate row, etc.)

I have got two questions. I realize Tom probably doesn't cover #1 but maybe a reader could help me.

1) A java util like the one Mio posted would be a lifesaver for this case. I could adapt it myself to handle the row spanning. I got Mio's code to compile and execute. But, it produces files that are much shorter than they should be (742 bytes vs. 2595, etc), and unreadable. Does anyone know how to correct this, or how I might even troubleshoot it? I am running it with Java 1.4.1_07 on Windows.
Incidentally, I found another pathway below that also works, but produces garbled files.

2) Would it be worth it to move the data to an Oracle 9 or 10 instance instead using exp, and then use one of the packages discussed on this page to write the blobs to files? Could that work for the multiple-row attachments, or even just the single-row ones?

Thanks much,
Natasha


From:
http://www.oracle-base.com/articles/8i/ExportBlob.php

CREATE OR REPLACE JAVA SOURCE NAMED "BlobHandler" AS
import java.lang.*;
import java.sql.*;
import oracle.sql.*;
import java.io.*;

public class BlobHandler
{
public static void ExportBlob(String myFile, BLOB myBlob) throws Exception
{
// Bind the object to the database object
// Open streams for the output file and the blob
File binaryFile = new File(myFile);
FileOutputStream outStream = new FileOutputStream(binaryFile);
InputStream inStream = myBlob.getBinaryStream();

// Get the optimum buffer size and use this to create the read/write buffer
int size = myBlob.getBufferSize();
byte[] buffer = new byte[size];
int length = -1;

// Transfer the data
while ((length = inStream.read(buffer)) != -1)
{
outStream.write(buffer, 0, length);
outStream.flush();
}
// Close everything down
inStream.close();
outStream.close();
}
};
/

ALTER java source "BlobHandler" compile;
show errors java source "BlobHandler"

CREATE OR REPLACE PROCEDURE ExportBlob (p_file IN VARCHAR2,
p_blob IN BLOB)
AS LANGUAGE JAVA
NAME 'BlobHandler.ExportBlob(java.lang.String, oracle.sql.BLOB)';
/

EXEC Dbms_Java.Grant_Permission( -
'USER', -
'SYS:java.io.FilePermission', -
'C:\temp\*', -
'read,write,execute,delete');

EXEC dbms_java.grant_permission('USER','SYS:java.lang.RuntimePermission','writeFileDescriptor', null);
EXEC dbms_java.grant_permission('USER','SYS:java.lang.RuntimePermission','readFileDescriptor', null);
EXEC dbms_java.grant_permission('USER','SYS:java.util.PropertyPermission','C:\test\*', 'read,write');

DECLARE
v_blob BLOB;
cursor c1 is select detail from blobtable where row_id = 1006;
BEGIN
open c1;
fetch c1 into v_blob;
ExportBlob('c:\temp\test.txt',v_blob);
close c1;

END;
/


Tom Kyte
October 22, 2004 - 3:00 pm UTC

you've got a single blob stored in multiple rows? arg, why why why would you do that.

if you fix that one problem -- put the blobs that belong together TOGETHER (get rid of the multi-line stuff) you could just ftp them right out of your database as shown above.

Thanks

A reader, November 19, 2004 - 1:53 pm UTC

Thanks so much for the response!!

I couldn't agree more, spanning single blobs across multiple rows is @&#%?!!. That's the way the data is stored by an evil legacy app. We brought in a new, better app and it is my job to extract / import the legacy data.

The ftp method is really cool. I will use it on the non-rowspanned blobs.

dmbs_lob vs. JDBC get/put

Andy, November 24, 2004 - 9:48 am UTC

Just curious, Tom, as to whether dbms_lob is to be preferred over JDBC getCLOB + putString. I'm using the Oracle 9i JDBC Developer's Guide (which only mentions dbms_lob in passing), though I notice from Expert 1-on-1 that you have examples using dbms_lob. Does it just come down to ease of coding, or is there an inherent difference in the way they function?

Tom Kyte
November 24, 2004 - 10:18 am UTC

Expert one on one Oracle does not have very much java in it, the JDBC developers guide has gobs of it.

I work in PLSQL -- hence dbms_lob is my interface to a lob.

In java, it would be more "natural" to use a stream.

Use of FTP Package

Jignesh, November 29, 2004 - 2:08 am UTC

Hi Tom,

The requirement is the same... to get the data from a BLOB and store it in some external file location.  The server is Windows 2000..... The database is 9i.  Please note that I have created the package FTP in the Scott schema.

When I use this I am getting Error :

 declare
   l_conn  UTL_TCP.connection;
 BEGIN
   l_conn := ftp.login('rac192', '21', 'scott', 'tiger');
 end;

SQL> /
declare
*
ERROR at line 1:
ORA-29260: network error: TNS:no listener
ORA-06512: at "SYS.UTL_TCP", line 410
ORA-06512: at "SYS.UTL_TCP", line 252
ORA-06512: at "SCOTT.FTP", line 39
ORA-06512: at line 4

--- This is the status of listener :

LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 - Production on 29-NOV-2004 12:33:55

Copyright (c) 1991, 2002, Oracle Corporation.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC0)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 - Production
Start Date                29-NOV-2004 10:29:04
Uptime                    0 days 2 hr. 4 min. 50 sec
Trace Level               off
Security                  OFF
SNMP                      OFF
Listener Parameter File   C:\oracle\ora92\network\admin\listener.ora
Listener Log File         C:\oracle\ora92\network\log\listener.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC0ipc)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac192.atl.mdc.apar.com)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac192.atl.mdc.apar.com)(PORT=8080))(Presentation=HTTP)(Session=RAW))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac192.atl.mdc.apar.com)(PORT=2100))(Presentation=FTP)(Session=RAW))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "ora92" has 2 instance(s).
  Instance "ora92", status UNKNOWN, has 1 handler(s) for this service...
  Instance "ora92", status READY, has 1 handler(s) for this service...
Service "ora92XDB" has 1 instance(s).
  Instance "ora92", status READY, has 1 handler(s) for this service...
The command completed successfully
-----------------------------------

Could you please tell me where I am going wrong?
 

Tom Kyte
November 29, 2004 - 8:13 am UTC

ftp to that machine and show us ftp works, the "listener" in this case is the ftp server

DBA_JOBS

Murali, December 24, 2004 - 3:26 am UTC

Tom,

I am trying to access dba_jobs from a stored procedure and I get "ORA-00942: table or view does not exist". But when I do a "select * from dba_jobs" I get the desired results. What could be the problem?

Regards,
Murali

Tom Kyte
December 24, 2004 - 9:02 am UTC

DBA_JOBS

Murali, December 26, 2004 - 11:05 pm UTC

Hello Tom,
But I am able to do a select * from dba_jobs from sqlplus but am not able to compile any procedure or view having dba_jobs as a reference.

Could you please help me out of this problem?

Regards,
Murali

Tom Kyte
December 27, 2004 - 9:53 am UTC

Query

Muhammad Riaz Shahid, February 10, 2005 - 5:45 am UTC

Hello Tom !

we have a table with structure like:

create table mytab( a varchar2(10), pic blob);

and images are stored in this table. My problem is how to get this image displayed in Oracle Form ???

I am using Oracle 9i Rel 2 EE and Developer 6i.

Regards

Tom Kyte
February 11, 2005 - 2:56 am UTC

for forms questions, please use otn.oracle.com -> discussion forums. there is one for developer (forms/reports).

i haven't touched forms since march 1995 and never have run reports.

BLOB TO FILE and viceversa

Robert Hanrahan, February 25, 2005 - 10:41 am UTC

hi all,

after reading the pages regarding the BLOBS I finally
made a package called "BLOBIT" ;) which works... 
thanks to Tom and the readers:

-- create a directory in the database called "MY_FILES"

CREATE OR REPLACE DIRECTORY MY_FILES AS '/app/oracle/admin/FMS9_C/utl';
GRANT READ, WRITE ON DIRECTORY  MY_FILES TO CFMS9 WITH GRANT OPTION;
GRANT READ, WRITE ON DIRECTORY  MY_FILES TO PUBLIC WITH GRANT OPTION;

-- create a table that holds a BLOB column:

CREATE TABLE DEMO
(
  ID           INTEGER,
  DESCRIPTION  VARCHAR2(100 BYTE)               DEFAULT 'description:',
  THEBLOB      BLOB
)
TABLESPACE FAXOUT2
PCTUSED    40
PCTFREE    10
INITRANS   1
MAXTRANS   255
STORAGE    (
            INITIAL          10M
            NEXT             10M
            MINEXTENTS       1
            MAXEXTENTS       505
            PCTINCREASE      50
            FREELISTS        1
            FREELIST GROUPS  1
            BUFFER_POOL      DEFAULT
           )
LOGGING 
  LOB (THEBLOB) STORE AS 
      ( TABLESPACE  FAXOUT2 
        ENABLE      STORAGE IN ROW
        CHUNK       8192
        PCTVERSION  10
        NOCACHE
        STORAGE    (
                    INITIAL          10M
                    NEXT             10M
                    MINEXTENTS       1
                    MAXEXTENTS       505
                    PCTINCREASE      50
                    FREELISTS        1
                    FREELIST GROUPS  1
                    BUFFER_POOL      DEFAULT
                   )
      )
NOCACHE
NOPARALLEL;

ALTER TABLE DEMO ADD (
  PRIMARY KEY (ID)
    USING INDEX 
    TABLESPACE FAXOUT2
    PCTFREE    10
    INITRANS   2
    MAXTRANS   255
    STORAGE    (
                INITIAL          10M
                NEXT             10M
                MINEXTENTS       1
                MAXEXTENTS       505
                PCTINCREASE      50
                FREELISTS        1
                FREELIST GROUPS  1
               ));

CREATE OR REPLACE PUBLIC SYNONYM DEMO FOR DEMO;

-- package "BLOBIT"

CREATE OR REPLACE package BLOBIT as

/*
|| Package to manage BLOBS - rjh 2005
|| 1: load_a_file  - loads a binary file into BLOB column of table
|| 2: save_to_file - saves a BLOB from table to a file
|| rjh 2005
*/

procedure load_a_file  ( p_id in number, p_filename in varchar2 );
procedure save_to_file ( vid in integer, p_filename in varchar2 );

end BLOBIT;
/


CREATE OR REPLACE package body BLOBIT AS

procedure LOAD_A_FILE
( p_id in number, p_filename in varchar2 )
  as
      l_clob     blob;
      l_bfile    bfile;
  begin
      insert into demo values ( p_id, default, empty_blob() )
      returning theblob into l_clob;
      l_bfile := bfilename( 'MY_FILES', p_filename );
      dbms_lob.fileopen( l_bfile );
      dbms_lob.loadfromfile( l_clob, l_bfile, dbms_lob.getlength( l_bfile ) );
      dbms_lob.fileclose( l_bfile );

  commit;

end LOAD_A_FILE;

procedure SAVE_TO_FILE
( vid in integer, p_filename in varchar2 )
  as

    vblob blob;
    i2 number;
    amt number := 10000;
    len number;
    my_vr raw(10000);
    l_output utl_file.file_type;
    p_dir varchar2(30) default 'MY_FILES';
    p_file varchar2(30) := p_filename;
begin

    l_output := utl_file.fopen(p_dir, p_file, 'w', 32760);
    for l_cur in (SELECT theblob mylob FROM demo where id = vid)
    loop

        len :=  DBMS_LOB.GETLENGTH(l_cur.mylob);
        vblob := l_cur.mylob ;
        dbms_output.put_line('Length of the Column :  ' || to_char(len));
        i2 := 1;
        if len < 10000 then
        DBMS_LOB.READ(vblob,len,i2,my_vr);
        utl_file.put(l_output, my_vr );
        else
            DBMS_LOB.READ(vblob,amt,i2,my_vr);
            utl_file.put(l_output, UTL_RAW.CAST_TO_VARCHAR2(my_vr) );
        end if;

        i2 := i2 + amt;
        while (i2 < len) loop
        dbms_output.put_line('i2 : ' || to_char(i2));
        DBMS_LOB.READ(vblob,amt,i2,my_vr);
        utl_file.put(l_output, UTL_RAW.CAST_TO_VARCHAR2(my_vr) );
        utl_file.fflush(l_output);
        i2 := i2 + amt ;

    end loop;
    utl_file.fclose(l_output);
    end loop;

   commit;

end SAVE_TO_FILE;
end;
/

grant execute on BLOBIT to public;
create or replace public synonym BLOBIT for BLOBIT;

-- usage: put a binary file in the directory:
-- cp binary_file.pdf /app/oracle/admin/FMS9_C/utl/binary_file.pdf
-- put the pdf inside the BLOB column:

SQL> exec BLOBIT.load_a_file(1,'binary_file.pdf');

-- retrieve the BLOB back to the file, delete the file in the filesystem:
-- rm /app/oracle/admin/FMS9_C/utl/binary_file.pdf

SQL> exec BLOBIT.save_to_file(1,'binary_file.pdf');

-- now, you have the file out of the database...
-- cheers

Robert Hanrahan
http://rjh.keybit.net/index2.html

Where can i Get initplsj.sql in Oracle?

Sujit, March 03, 2005 - 6:33 am UTC

Hi,

I am using the UTL_SMTP package to send mails. I have created the procedure in 8i. I found from websites that "@$ORACLE_HOME/rdbms/admin/initplsj.sql" needs to be run in the DB before we can use the UTL_SMTP package. I ran the SQL from the path specified and could then use it.

However in a different system with Oracle 9i (9.2.0.4) i am unable to find the file "initplsj.sql" in "@$ORACLE_HOME/rdbms/admin/" folder.


My question is,
1. Does Oracle 9i has the file in same name or some different name?
2. Is the Oracle folder we r trying is in-complete ? i.e. is the file actually should have been there but missing?

Thanx.

Tom Kyte
March 03, 2005 - 7:57 am UTC

it was a mistake in 8i that you had to run it. it is done automagically in later releases.

Is initjvm.sql required to be run in 9i for enabling UTL_SMTP

Abhay, March 18, 2005 - 7:02 am UTC

In 8i both initjvm.sql and initplsj.sql were required to be run as step 1 and step 2 to enable emails.

You specified in response to a question that initplsj.sql is not required to be run explicitly in 9i.

Could you please advise whether we need to run $ORACLE_HOME/javavm/install/initjvm.sql also or is it done automatically too.


Tom Kyte
March 18, 2005 - 7:13 am UTC

is java already installed during your database create (you picked whether to have it or not)

if so, no need

if not, that installs it

Select Distinct CLOB from table

Cen, March 19, 2005 - 12:27 pm UTC

select distinct dbms_lob.substr(INFO) "INFO"
from A

gives the error

ORA-06502: PL/SQL: numeric or value error: character string buffer too small ORA-06512: at line 1

How to resolve this

Tom Kyte
March 19, 2005 - 12:58 pm UTC

you can only do 4000 characters at most there.

select distinct dbms_lob.substr(info,4000,1) info from a;


would work, but nothing bigger than 4000

What's the solution

Cen, March 19, 2005 - 8:52 pm UTC

Yeah, I found out it supports only 4000 characters. But then what would the solution be if I want to do it for more than 4000???

Thanks for your quick answer

Tom Kyte
March 20, 2005 - 7:31 am UTC

up to 32k you could create a md5 checksum and distinct them (dbms_obfuscation_toolkit)

beyond that -- distincting of megabyte sized documents will be "problematic", you could repeatedly call md5 to get checksums of checksums plus the next 32000 bytes and then distinct that - but make sure you have plenty of "time"
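A rough sketch of the first suggestion (10g PL/SQL syntax for the varchar2-indexed array; table A and clob column INFO are taken from the question above, adjust to taste):

declare
    type t_seen is table of number index by varchar2(64);
    l_seen     t_seen;
    l_key      varchar2(64);
    l_distinct number := 0;
begin
    for r in ( select info from a )
    loop
        -- md5 of the first 32,000 characters of each clob
        l_key := rawtohex( utl_raw.cast_to_raw(
                     dbms_obfuscation_toolkit.md5(
                         input_string => dbms_lob.substr( r.info, 32000, 1 ) ) ) );
        if not l_seen.exists( l_key ) then
            l_seen( l_key ) := 1;
            l_distinct := l_distinct + 1;
        end if;
    end loop;
    dbms_output.put_line( 'distinct (by md5 of first 32k): ' || l_distinct );
end;
/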

Save blob to file

A reader, May 13, 2005 - 2:56 pm UTC

Hi Tom,

I followed the procedure as specified above to write a blob to a file using utl_raw.put_raw. What should I save the file as and how should I open it to see if the image is displaying correctly. Now I have saved the file as .jpg and when I open it I am seeing a tiny red arrow mark (indicating invalid image). Pls help.

Thanks.

Tom Kyte
May 13, 2005 - 3:03 pm UTC

save the file "as" anything, that is up to you.

if you compare it to the source file, is it the same size?

A reader, May 13, 2005 - 3:12 pm UTC

Yes the length is the same. But if I want to visually see the image how can I do it?


Tom Kyte
May 13, 2005 - 4:22 pm UTC

double click on it? open it in an image viewer?

same way you view the original

amit poddar, June 02, 2005 - 10:16 am UTC

Hi,

I am trying to write blob data into the file using utl_file.put_raw

I am successful until the file size reaches 32767; after that I get a PL/SQL numeric or value error. I have stripped the procedure down to the bare minimum that still gives the error.

  1  select file_id,file_ticket_id,file_size from pt5_file where file_ticket_id=2979
  2* order by file_size
SQL> /

   FILE_ID FILE_TICKET_ID  FILE_SIZE
---------- -------------- ----------
       753           2979      24576
       755           2979      32256
       756           2979      35328
       754           2979      43520


SQL> declare
  2      len number;
  3  begin
  4      for l_cur in (SELECT file_data mylob
  5                      FROM pt5_file
  6                     where file_ticket_id=2979
  7                     order by file_size)
  8      loop
  9         len :=  dbms_lob.getlength(l_cur.mylob);
 10         dbms_output.put_line('length= '||len);
 11      end loop;
 12  end;
 13  /
length= 24576
length= 32256
declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at line 4


SQL> 

Can you tell me if I am doing something wrong ? 

Tom Kyte
June 02, 2005 - 5:06 pm UTC

not sure I understand.   need help to reproduce.



ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table t ( x clob );
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> declare
  2          l_clob clob;
  3  begin
  4          insert into t values ( empty_clob() ) returning x into l_clob;
  5          dbms_lob.writeAppend( l_clob, 30000, rpad('x',30000,'x') );
  6          dbms_lob.writeAppend( l_clob,  5328, rpad('x',5328,'x') );
  7          commit;
  8
  9          for x in ( select * from t )
 10          loop
 11                  dbms_output.put_line( dbms_lob.getlength(x.x) );
 12          end loop;
 13  end;
 14  /
35328
 
PL/SQL procedure successfully completed.
 
 

Storing the file into a blob column

Vijay, June 06, 2005 - 4:51 am UTC

Hi Tom,

What is wrong with the below code? Am i missing something?

create or replace directory MY_FILES as 'c:\Test';

Grant all on directory MY_FILES to public with grant option;

CREATE TABLE DEMO
(
ID INTEGER,
DESCRIPTION VARCHAR2(100 BYTE) DEFAULT 'description:',
THEBLOB BLOB
);

create or replace procedure LOAD_A_FILE (p_id in number, p_filename in varchar2 )
as
l_clob blob;
l_bfile bfile;
l_dir CONSTANT VARCHAR2(30) := 'MY_FILES';
l_file CONSTANT VARCHAR2(30) := 'test.txt';
l_ret INTEGER := 0;
begin
insert into demo values ( p_id, 'Sample', empty_blob() )
returning theblob into l_clob;
l_bfile := bfilename(l_dir, l_file);
begin
/* l_ret := DBMS_LOB.FILEEXISTS(l_bfile);
if l_ret = 1 then*/
dbms_lob.fileopen( l_bfile, DBMS_LOB.LOB_READONLY);
dbms_lob.loadfromfile( l_clob, l_bfile, dbms_lob.getlength( l_bfile ) );
commit;
/* ELSE
dbms_output.put_line('File does not exist');
dbms_output.put_line('Error ' || SQLCODE || SQLERRM);
END IF;*/
exception
when others then
dbms_output.put_line(SQLCODE || SQLERRM);
end;
if dbms_lob.fileisopen(l_bfile) = 1 then
dbms_output.put_line('File opened');
dbms_lob.fileclose( l_bfile );
else
dbms_output.put_line('File not opened');
end if;
end LOAD_A_FILE;

When I execute the procedure I am getting the below error,

-22288ORA-22288: file or LOB operation FILEOPEN failed
LFI-00108: Open expect the file [test][txt]
to exist.

Tom Kyte
June 06, 2005 - 7:30 am UTC

seems that the file c:\test\test.txt does not exist on the DATABASE SERVER machine.

File Exists

Vijay, June 07, 2005 - 8:15 am UTC

No, I have created the directory and the test file.

Tom Kyte
June 07, 2005 - 8:30 am UTC

ON the DATABASE server.

ORA-22288: file or LOB operation FILEOPEN failed
LFI-00108: Open expect the file [test][txt]
to exist.

the error is fairly straightforward.

Can you "prove" this. On the SERVER itself cut and paste the "dir" listing that shows the file, then using sqlplus.exe log into the local database, run a procedure that does NOTHING other than tries to open the file (eg: a one line begin/end block of code)

Problem solved

Vijay, June 07, 2005 - 8:37 am UTC

Thanks tom, I have created the file under database server and now it is working. Many thanks for your reply.

how about for clobs?

A reader, June 20, 2005 - 10:02 am UTC

I have succesfully run the dump_csv for regular datatypes in Oracle8i is it possible to slighty modify this to work for the CLOB datatype?

Tom Kyte
June 20, 2005 - 10:45 am UTC

you've got the code, you can make it do anything.

Yes, you can make it work with clobs. you would use the output of the dbms_sql describe routine to detect a clob and bind to it, retrieve it and use dbms_lob.read or dbms_lob.substr to print it out by piece.
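A rough sketch of that idea (not the dump_csv code itself -- the query, table T and clob column X here are made up):

declare
    l_cursor  integer := dbms_sql.open_cursor;
    l_colcnt  number;
    l_desc    dbms_sql.desc_tab;
    l_clob    clob;
    l_status  integer;
    l_offset  number;
begin
    dbms_sql.parse( l_cursor, 'select x from t', dbms_sql.native );
    dbms_sql.describe_columns( l_cursor, l_colcnt, l_desc );

    -- col_type 112 = CLOB; bind a clob for that column instead of a varchar2
    if l_desc(1).col_type = 112 then
        dbms_sql.define_column( l_cursor, 1, l_clob );
    end if;

    l_status := dbms_sql.execute( l_cursor );
    while dbms_sql.fetch_rows( l_cursor ) > 0
    loop
        dbms_sql.column_value( l_cursor, 1, l_clob );

        -- retrieve and print the clob piecewise (dump_csv would utl_file.put each piece)
        l_offset := 1;
        while l_offset <= dbms_lob.getlength( l_clob )
        loop
            dbms_output.put_line( dbms_lob.substr( l_clob, 250, l_offset ) );
            l_offset := l_offset + 250;
        end loop;
    end loop;
    dbms_sql.close_cursor( l_cursor );
end;
/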

What determines....

CG, June 21, 2005 - 3:43 pm UTC

..... the line size of the xml file that I create from the following code:

declare
queryCtx DBMS_XMLquery.ctxType;
result CLOB;
len INTEGER;
l_buff varchar2(4000);
amt BINARY_INTEGER := 4000;
offst INTEGER := 1;
file_not_found EXCEPTION;
clob_file_handle utl_file.file_type;
pragma exception_init(file_not_found, -302000);
begin

dbms_output.put_line('-- set up the query context...!');
queryCtx := DBMS_XMLQuery.newContext('select * from sams.personnel');

dbms_output.put_line('-- get the result..!');
result := DBMS_XMLQuery.getXML(queryCtx);
-- Now you can use the result to put it in tables/send as messages..
clob_file_handle := utl_file.fopen('c:\wip','myclob.xml','w', amt);
dbms_output.put_line('....The file is opened');
len := dbms_lob.getlength(result);
dbms_output.put_line('...The length of the clob is '||len);
if len > amt then
while ( len > offst ) loop
dbms_output.put_line('....Inside the code');
dbms_lob.read( result, amt, offst, l_buff);
utl_file.put_line(clob_file_handle, l_buff );
offst := offst + amt;
utl_file.fflush(clob_file_handle);
dbms_output.put_line('...The buffer is flushed');
end loop;
else
dbms_output.put_line('In the else statement');
dbms_lob.read( result, len, offst, l_buff);
utl_file.fflush(clob_file_handle);
end if;

utl_file.fclose(clob_file_handle);
dbms_output.put_line('The file is closed');
DBMS_XMLQuery.closeContext(queryCtx); -- you must close the query handle..
EXCEPTION
WHEN file_not_found THEN
IF UTL_FILE.IS_OPEN(clob_file_handle) THEN
UTL_FILE.FCLOSE(clob_file_handle);
END IF;
WHEN UTL_FILE.INVALID_PATH THEN
UTL_FILE.FCLOSE(clob_file_handle);
RAISE_APPLICATION_ERROR(-20020, 'Invalid Path');
WHEN UTL_FILE.INVALID_MODE THEN
UTL_FILE.FCLOSE(clob_file_handle);
RAISE_APPLICATION_ERROR(-20030, 'Invalid File Mode');
WHEN UTL_FILE.INVALID_OPERATION THEN
UTL_FILE.FCLOSE(clob_file_handle);
RAISE_APPLICATION_ERROR(-20040, 'Invalid File Operation');
WHEN NO_DATA_FOUND THEN
dbms_output.put_line('End of data');
end;
/

in my file I have lines like this:
.....<snipped for brevity>
<ROW num="11">
<BIRTH_DATE>1/1/1950 0:0:0</BIRTH_DATE>
<BLOOD_ABO>U</BLOOD_ABO>
<BLOOD_RH>U</BLOOD_RH>
<CMD_SEQ>860</CMD_SEQ>
<DEPT_SEQ>880</DEPT_SEQ>
<DIV_SEQ>881</DIV_SEQ>
<DUTY_CODE>N11</DUTY_CODE>
<FLIGHT_STATUS_IND>N</FLIGHT_STATUS_IND>
<FMPC_ID>20</FMPC_ID>
<FNAME>KNUCKLE</FNAME>
<GENDER>M</GENDER>
<HIGH_RISK_MAM_IND>N</HIGH_RISK_MAM_IND>
<HISPANIC_LATINO_IND>Y</HISPANIC_LATINO_IND>
<LMP_PROMPT_IND>N</LMP_PROMPT_IND>
<LNAME>DRAGGER</LNAME>
<MED_RE
ADY_STAT>U</MED_READY_STAT>
<PAYGRADE_ID>E04</PAYGRADE_ID>
<PERS_SEQ>882</PERS_SEQ>
.... <snipped for brevity>

as you can see above, the med_ready_stat element name is cut by a newline.

What causes that? I read your statements that with binary data you will get issues with utl_file. Is the same true for clobs also?

Tom Kyte
June 21, 2005 - 9:02 pm UTC

if len > amt then
while ( len > offst ) loop
dbms_output.put_line('....Inside the code');
dbms_lob.read( result, amt, offst, l_buff);
utl_file.put_line(clob_file_handle, l_buff );
offst := offst + amt;
utl_file.fflush(clob_file_handle);
dbms_output.put_line('...The buffer is flushed');
end loop;

you decided, it is amt -- you are using PUT_LINE, maybe you meant PUT since the xml has newlines already.
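That is, a sketch of the same loop using PUT (same variable names as the block above):

while ( len > offst ) loop
    dbms_lob.read( result, amt, offst, l_buff );
    utl_file.put( clob_file_handle, l_buff );   -- put, not put_line: no extra newlines injected
    offst := offst + amt;
    utl_file.fflush( clob_file_handle );
end loop;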

Thank You

cg, June 23, 2005 - 2:33 pm UTC

True or False:
I am limited to 4000 byte pieces when reading a clob and writing it to the file system ( via PL/SQL )?

Based on the fact that if I try to write character data to a blob it will "do something" to the data.

dbms_lob.read( if_clob_here, amt, offset, must_have_varchar_here)

I was wondering whether there was a way to read the maximum 32k for a varchar2 instead of l_var varchar2(4000);



Tom Kyte
June 23, 2005 - 7:04 pm UTC

false.

you are limited to 32k in plsql
you are limited to 4000 bytes in sql
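For example, a quick sketch of the difference (a made-up table T with clob column X):

-- SQL: dbms_lob.substr on a clob is capped at 4000 characters
select dbms_lob.substr( x, 4000, 1 ) from t;

-- PL/SQL: the buffer can be up to 32767 characters
declare
    l_buf varchar2(32767);
    l_amt number;
begin
    for r in ( select x from t )
    loop
        l_amt := 32767;
        dbms_lob.read( r.x, l_amt, 1, l_buf );
    end loop;
end;
/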

Nevermind

cg, June 23, 2005 - 3:33 pm UTC

I thought about what I said and realized my knowledge was not accurate.
varchar2 can be 32000.

ExportBLOB memory leak?

Dan, August 05, 2005 - 12:01 pm UTC

I have the EXPORTBLOB package from above; works fine. The problem is that I am archiving off several million BLOBS (actually PDF files). I noticed that as this ran, the session PGA just keeps growing...and growing... and growing. By the time it had exported 1.3Million, the session PGA was over 570,000,000.

It is called from a PL/SQL procedure that opens a cursor to select from the table (few million rows can be returned), computes the destination file name, and calls this procedure

Here is the relevant loop of the program

for image_rec in image_curs(l_start_date, l_end_date) loop
EXIT WHEN image_curs%notfound;

l_row_count := image_curs%rowcount;
l_length := dbms_lob.getlength(image_rec.stmt_image);

if l_length > 0 then
-- some statements get a useful name
l_filename := null;
if image_rec.stmt_source = 'P' then
l_filename := get_filename1 (
image_rec.outlet_id,
image_rec.queue_date,
image_rec.sequence_id);
else
-- Others get a less useful name, since we know less
-- about it
l_filename := get_filename2 (
image_rec.outlet_id,
image_rec.stmt_process_dt,
image_rec.sequence_id);
end if;

begin
ExportBlob(l_filename,image_rec.stmt_image);
exception
when others then
l_sqlerrm := sqlerrm;
-- If we got the error that the directory was
-- not found, then create the directory and retry
if instr(l_sqlerrm,
'java.io.FileNotFoundException: No such file or directory')> 0
then
make_subdirectory(l_filename);
ExportBlob(l_filename,image_rec.stmt_image);
else
-- Not the "directory not found" error, so
-- propagate the error
raise;
end if;
end;
end if;
end loop;

Here is the exact java and call spec...

CREATE OR REPLACE AND RESOLVE JAVA SOURCE NAMED "exportBLOB" AS
import java.lang.*;
import java.sql.*;
import oracle.sql.*;
import java.io.*;

public class exportBLOB
{
public static void export_it(String myFile, BLOB myBlob) throws Exception
{
// Bind the object to the database object
// Open streams for the output file and the blob
File binaryFile = new File(myFile);
FileOutputStream outStream = new FileOutputStream(binaryFile);
InputStream inStream = myBlob.getBinaryStream();

// Get the optimum buffer size and use this to create the read/write buffer
int size = myBlob.getBufferSize();
byte[] buffer = new byte[size];
int length = -1;

// Transfer the data
while ((length = inStream.read(buffer)) != -1)
{
outStream.write(buffer, 0, length);
outStream.flush();
}

// Close everything down
inStream.close();
outStream.close();
}
};
/

CREATE OR REPLACE procedure exportBLOB (
p_file in varchar2,
p_blob in blob) as
language java name 'exportBLOB.export_it(java.lang.String, oracle.sql.BLOB)';
/

Any suggestions? (also is there a better way to trap the Java error?)

Thanks
Dan

Tom Kyte
August 05, 2005 - 2:17 pm UTC

we need to try and determine "what is leaking", if you comment out the export blob and run this, does it "grow and grow"?

Java Blob - Memory usage

Dan, August 09, 2005 - 3:22 pm UTC

If I remove the call to EXPORTBLOB, the memory usage does not grow significantly (and it runs a LOT faster - too bad it doesn't actually DO anything useful then...)

I suspect that each call to Java is grabbing a new chunk of memory with each call, and not releasing it until the garbage collection runs at the end of the process.

I tried using NOCOPY in the call spec, but that would not compile (which is consistent with the documentation for NOCOPY).

BTW:
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production

Thanks for any assistance you can provide...





Tom Kyte
August 09, 2005 - 3:46 pm UTC

You'll need to work with support on this - I'm not too experienced with Java personally and don't know if there is a different way to rewrite this.

Any thoughts to "breaking this up" into smaller bits (which could be run in parallel even)



Java BLOB

Dan, August 09, 2005 - 4:45 pm UTC

I have an odd distribution of data... day 1 to day 29 has a total of 20,000 rows, day 30 has 250,000 rows, so the obvious break point of day isn't of much help.

On reading the DBMS_LOB doc's, it said the memory was freed when the block was completed, so I made a small block within the loop...something like this

open cursor
loop
declare
l_blob blob
begin
fetch cursor into l_blob...,
exit when cursor not found
exportblob
end;
end loop;
close cursor;

The idea was that when the block was done (with every loop) the memory for the blob would be freed.

Didn't help - perhaps even slower... (old way processed 10,000 blobs in 2-5 minutes, new way was more like 10 minutes, but this is a busy box).

Thanks for your help


Tom Kyte
August 09, 2005 - 5:45 pm UTC

rowid ranges are good if the table is in multiple extents (the more the better)


http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:10498431232211
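For what it's worth, a rough sketch of carving the work up (T and its numeric primary key ID are placeholders for your statement-image table):

-- split the driving table into 8 roughly equal key ranges;
-- each range can then be exported by its own session, in parallel
select bucket, min(id) as lo_id, max(id) as hi_id
  from ( select id, ntile(8) over (order by id) as bucket
           from t )
 group by bucket
 order by bucket;

Each session would then run the same export loop with "where id between :lo_id and :hi_id".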

Saving resultset into BLOB

Anurag, August 15, 2005 - 6:20 pm UTC

In the past I have successfully saved files using java into blob and even by using bfile.

I am trying to save a SQL result set (in PL/SQL), separated by commas (as in a csv file), into a BLOB. I am running into errors when I try to save the result set by using an INTO clause inside the select or by using UTL_RAW to convert the result set into raw format.

Is there a better way of saving the result set into blob?

Sample query

select col_a || ',' || col_b || ',' ||
col_c || ',' || col_d || col_e || ',' ||
col_f
from table_a
where condition;

PS: I am using Oracle 8.1.7 and will migrate the code to 9i in about 2 months.


Tom Kyte
August 15, 2005 - 10:45 pm UTC

if you run that query

and dbms_lob.writeappend the UTL_RAW.CAST_TO_RAW of the fetched string

it should work, assuming you have a blob to write into

but one question is why you put text into a blob.
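Something along these lines should do it (a sketch only; CSV_STORE and the column/table names are made up to stand in for your own):

declare
    l_blob blob;
    l_line varchar2(4000);
begin
    -- create (or select for update) the blob we are going to fill
    insert into csv_store ( id, doc ) values ( 1, empty_blob() )
    returning doc into l_blob;

    for r in ( select col_a || ',' || col_b || ',' || col_c as line
                 from table_a )
    loop
        l_line := r.line || chr(10);
        dbms_lob.writeappend( l_blob,
                              utl_raw.length( utl_raw.cast_to_raw( l_line ) ),
                              utl_raw.cast_to_raw( l_line ) );
    end loop;
    commit;
end;
/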

writeAppend very helpful

Anurag Mehrotra, August 16, 2005 - 11:51 am UTC

This is just one type of data. The BLOB will also hold Word docs, images, pdf files etc., and yes, instead of saving Excel sheets we are simply saving the comma separated text, which we can display back on the client machine using the appropriate mime-type.


Output of particular column whose datatype is LONG RAW to be displayed in Excel format.

Shirish, September 01, 2005 - 4:13 pm UTC

Hi TOM,

We have inserted data into one particular column whose datatype is LONG RAW. Now I want to display the contents of this column.

I want to write a Visual Basic program whose result, with that particular content, will be displayed in Excel format.

I am using Oracle Release 9.0.1.0.1

Kindly let me know how to go about it.

Regards
Shirish

Tom Kyte
September 01, 2005 - 4:21 pm UTC

write a VB program that

a) connects to oracle
b) retrieves the blob
c) writes it to a file
d) runs excel against that file.

Output of particular column whose datatype is LONG RAW to be displayed in Excel format

Shirish, September 01, 2005 - 5:51 pm UTC

Hi Tom,

Thank you for your message.
It would be of great help , if you could send the Visual Basic code which
a) connects to oracle
b) retrieves the blob
c) writes it to a file
d) runs excel against that file

Regards
Shirish

Tom Kyte
September 02, 2005 - 1:04 am UTC

ummm, wow.

(you should just use mod_plsql, no code to deploy on client desktop, they just need browser, then I'd write the "code" for you as it would be like 3 or 4 lines of code)


http://docs.oracle.com/docs/cd/B10501_01/appdev.920/a96591/toc.htm

there is the doc you need.

FTP through PL/SQL

Dawar Naqvi, September 21, 2005 - 1:22 pm UTC

Tom,

OS: Linux
DB: 10.1.0.3.0

I would like to do the ftp using PL/SQL .

What I know is I can do the FTP using the UTL_TCP and UTL_FILE packages that Oracle provides, but in this case the requirement is to ftp and "APPEND" to the destination file.


What I want to know is how to do FTP through PL/SQL?

Regards,
Dawar




Tom Kyte
September 21, 2005 - 7:40 pm UTC

you would have to implement that part of the ftp protocol - the text is all in clear text - it is possible for you to do that.

how to retrieve file from db ?

Parag J Patankar, November 23, 2005 - 11:12 am UTC

Hi Tom,

In Oracle 9.2 I have stored a Word file in a BLOB. My database is on Unix. Now I want to retrieve this file on AIX; I do not want to display the file. Can you show me how to do this?

regards & thanks
pjp

Tom Kyte
November 23, 2005 - 7:10 pm UTC

what language are you going to write your program in on the AIX machine.

blob retrieve

Parag J Patankar, November 23, 2005 - 11:39 pm UTC

Hi Tom,

I would prefer to have a PL/SQL package to retrieve the blob data onto Unix as a file.

regards & thanks
parag j patankar

Tom Kyte
November 24, 2005 - 8:45 am UTC

plsql runs in the server.

plsql can write to the server.

plsql does not run on the client( aix)

therefore plsql cannot write to the client filesystem (it would be virus like if it could).


hence, you shall have to write a piece of client code on the client that creates this file.

if your aix machine has a ftp server, you can use this technique:

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:6379798216275#19264891198142

FTP pl/sql for BLOB

Parag Jayant Patankar, November 29, 2005 - 5:26 am UTC

Hi Tom,

As per the link you provided, I have created the ftp package spec and body in an Oracle 9.2 database without any error. When I try to run the following with '21' (assuming it is the port number):
l_conn := ftp.login('10.192.129.200', '21', 'docs', 'docs');
 
I am getting following error


220 mumsa128200 FTP server (Version 4.1 Tue Jul 6 21:20:07 CDT 2004) ready.
331 Password required for docs.
530 Login incorrect.
DECLARE
*
ERROR at line 1:
ORA-20000: 530 Login incorrect.
ORA-06512: at "DOCS.FTP", line 124
ORA-06512: at "DOCS.FTP", line 104
ORA-06512: at "DOCS.FTP", line 50
ORA-06512: at line 7

When I am running with 1521 

DECLARE
    l_conn  UTL_TCP.connection;
    l_blob  blob;
BEGIN
    select filesave into l_blob from documents where id = 1;
 
    l_conn := ftp.login('10.192.129.200', '1521', 'docs', 'docs');
 
    ftp.binary(p_conn => l_conn);
 
    ftp.put_remote_binary_data
           (p_conn      => l_conn,
            p_file      => '/tmp/foo.doc',
            p_data      => l_blob );
    ftp.logout(l_conn);
    utl_tcp.close_all_connections;
END;
/

This just keeps running and running (hanging). But nothing is happening on the server in the /tmp directory.

  1* select * from all_directories where directory_name='MY_DOCS'
15:40:31 SQL> /

OWNER                          DIRECTORY_NAME
------------------------------ ------------------------------
DIRECTORY_PATH
-----------------------------------------------------------------
SYS                            MY_DOCS
/tmp


Kindly guide me to solve the problem.

regards & thanks
pjp 

Tom Kyte
November 29, 2005 - 10:24 am UTC

your login is wrong??


why would you try to FTP to an Oracle TNS Listener??????


this is just ftp, if you can ftp to that server and connect as docs/docs, this will work.

If you cannot ftp to that server, this will not.

UTL_FILE

kamesh, December 08, 2005 - 11:26 am UTC

Hi Tom,

My day does not pass without reading the queries posted on your site. I am very impressed with the way you answer the questions; it gives great insight into the subject. Today I am here to ask a question which has been haunting me for a while.

We have a requirement to write data collected from tables into a file (a SQL query is written to get the data we want, and the result set is read to fetch the values, which are written to a file). The reason we are writing to a file is that we will use this csv file to load the data into other databases (SQL Server, Oracle 8i-9i, or it could be anything else). We deal with spatial data.

Presently we have a Java class which makes JDBC calls to execute a procedure in order to get the results. These results are then read row by row and written to a file. We have blob fields in the database. We are using the file I/O provided by Java for this purpose, with buffered streams so that the I/O activity is reduced. Our aim is to get the results very fast.

I was thinking: would it be good to use the UTL_FILE package provided by Oracle for this purpose? Would the response time be faster with this? Please explain in case you have come across such a situation. How did you go about doing it?

Thanks,
kamesh

Tom Kyte
December 08, 2005 - 12:04 pm UTC

I doubt utl_file would be a good match for writing high volumes of data out to files.

I have unload routines in

plsql
sqlplus
C

they operate speedwise in this order

plsql
sqlplus
C

from slow to fast...

Unload routines

Kamesh, December 08, 2005 - 12:53 pm UTC

Could you kindly send me those routines that you have? For the C routines, in what mode do you open the file -- is the input stream character or buffered? There should be some place where you buffer the whole data set and write it to the file, is there not? Otherwise, reading one row and writing it to the file would be very time consuming, would it not? If it is possible to send the C or Java code it would be great.

Tom Kyte
December 09, 2005 - 1:13 am UTC

http://asktom.oracle.com/~tkyte/flat/index.html

I just use good ole stdout.

A reader, December 08, 2005 - 7:48 pm UTC

Kamesh,

Do a search on unloading data and you will find lots of stuff on this site.

unique index on blobs

A reader, April 05, 2006 - 8:27 am UTC

Hi

When we define a unique index on a blob field, what is checked for duplicates? The hex, a checksum, or?



Tom Kyte
April 05, 2006 - 6:08 pm UTC

show me a unique index on a blob field first :)

ops$tkyte@ORA9IR2> create table t ( x blob );

Table created.

ops$tkyte@ORA9IR2> create unique index t_idx on t(x);
create unique index t_idx on t(x)
                               *
ERROR at line 1:
ORA-02327: cannot create index on expression with datatype LOB

 

it is a Oracle Application Server Table

A reader, April 06, 2006 - 11:05 am UTC

SQL> select owner, index_name, index_type, uniqueness from dba_indexes where owner = 'DCM'
  2  and index_type = 'LOB';

OWNER                          INDEX_NAME                     INDEX_TYPE                  UNIQUENES
------------------------------ ------------------------------ --------------------------- ---------
DCM                            SYS_IL0000049837C00007$$       LOB                         UNIQUE


select dbms_metadata.get_ddl('INDEX', 'SYS_IL0000049837C00007$$', 'DCM') from dual;

  CREATE UNIQUE INDEX "DCM"."SYS_IL0000049837C00007$$" ON "DCM"."PM_OBJECTS" (
  PCTFREE 10 INITRANS 2 MAXTRANS 255
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  TABLESPACE "DCM"
  PARALLEL (DEGREE 0 INSTANCES 0)


 

Tom Kyte
April 07, 2006 - 3:57 pm UTC

that is not an index on a lob, that is a lob index (the index that gets to the lob segment)

it is like the primary key of the lob chunks (like - not exactly the same)




what is a lob index for?

A reader, April 08, 2006 - 9:22 am UTC

Hi

I tried to use that syntax to create an index but it fails, complaining about a missing ')'.

I was trying to understand what a lob index is for.


Tom Kyte
April 08, 2006 - 9:55 am UTC

it is an internal index WE use to find the chunks of your lob.

<quote src=expert oracle database architecture>

This implies there is a lot going on in the background with LOBs, and in fact there is. A LOB column always results in what I call a multisegment object, meaning the table will use multiple physical segments. If we had created that table in an empty schema, we would discover the following:

ops$tkyte@ORA10G> select segment_name, segment_type
  2  from user_segments;
 
SEGMENT_NAME                   SEGMENT_TYPE
------------------------------ ------------------
SYS_C0011927                   INDEX
SYS_IL0000071432C00002$$       LOBINDEX
SYS_LOB0000071432C00002$$      LOBSEGMENT
T                              TABLE

An index was created in support of the primary key constraint—that is normal—but what about the other two segments, the lobindex and the lobsegment? Those were created in support of our LOB column. The LOBSEGMENT is where our actual data will be stored (well, it might be stored in the table T also, but we’ll cover that in more detail when we get to the ENABLE STORAGE IN ROW clause). The LOBINDEX is used to navigate our LOB, to find the pieces of it. When we create a LOB column, in general what is stored in the row is a pointer, or LOB locator. This LOB locator is what our application retrieves. When we ask for “bytes 1,000 through 2,000” of the LOB, the LOB locator is used against the LOBINDEX to find where those bytes are stored, and then the LOBSEGMENT is accessed. The LOBINDEX is used to find the pieces of the LOB easily. You can think of a LOB then as a master/detail sort of relation. A LOB is stored in “chunks” or pieces, and any piece is accessible to us. If we were to implement a LOB using just tables, for example, we might do so as follows:

Create table parent 
( id int primary key, 
  other-data...
);

Create table lob 
( id references parent on delete cascade, 
  chunk_number int, 
  data <datatype>(n), 
  primary key (id,chunk_number) 
);

Conceptually, the LOB is stored very much like that—in creating those two tables, we would have primary key on the BLOB table on the ID,CHUNK_NUMBER (analogous to the LOBINDEX created by Oracle), and we would have a table BLOB storing the chunks of data (analogous to the LOBSEGMENT). The LOB column implements this master/detail structure for us transparently. 

</quote> 

filename

mal, May 10, 2006 - 6:29 pm UTC

Tom,

How to display filename only from file_name string?
/export/u01/oradata/AAA/tools01.dbf

I need only tools01.dbf?
What will be the sql?
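A quick sketch (assuming the string comes from dba_data_files.file_name): take everything after the last '/'.

select substr( file_name, instr( file_name, '/', -1 ) + 1 ) as short_name
  from dba_data_files;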


Export BLOB Contents to File

Shital, May 13, 2006 - 7:32 am UTC

hi !!
I tried ur Java stored procedur "ExportBlob"
it is working well on my side & vvery useful to me.

I want to modify that procedure little bit can u pl help as
I am new to Java

Your procedur -
public static void ExportBlob(String myFile, BLOB myBlob) throws Exception
{
File binaryFile = new File(myFile);
FileOutputStream outStream = new FileOutputStream(binaryFile);

----
--}

now instead of sending full path file name as a input I want to send only file name & use location as "Image"
I created directory alias as

Create Directory Image AS 'C:\MyPhoto'"


can u pl tell me how should I use Directory alias in Java stored procedure?






Tom Kyte
May 13, 2006 - 9:09 am UTC

"U" isn't available. The requests for their services seem to be increasing over time.

Do YOU know how to contact this U person?


You don't use a directory alias in a Java stored procedure, you just use the name

... new File( "c:\\MyPhoto\\" + myFile );



A reader, May 14, 2006 - 8:09 am UTC

Tom,

Maybe you should put a message on the main page mentioning no IM lingo or whatever, as there are a lot of new folks visiting every day (just a thought).

Thanks.
Rahul.

Tom Kyte
May 14, 2006 - 9:03 am UTC

I sort of figured that making it big and bold on the page you use to actually input comments would be more sensible - people have to see that one in order to put a comment here.

Yet, it does not stop them...

Not anymore than the CHECKLIST I make people read, and check off on (signifying "i've read it") from posting questions that conflict with many of the checklist items!

Data Field

Ram, June 21, 2006 - 2:40 pm UTC

Dear Tom,
I am in the planning stage of converting a SQL Server 2000 batch process to Oracle9i.
The SQL Server 2000 batch stored procedure does several calculations and finally prepares a flat file and INSERTS it INTO Table.C1 of varchar(4000). The flat-file width never exceeds 4000, but the number of lines in the flat file may be near 4000. There is another stored procedure which simply selects from Table.C1 and outputs it to a VB app for formatting and display. This is in working condition.
My questions:
(1) Is there any limit on how much a data field can grow vertically? Meaning, can I INSERT 1000+ lines of a flat file into ONE column? The entire flat file is considered as data.
Ex:
124848488499999999999999999999999444444444444444444444444
Hellow Tom You are the best
My name is Ram from Harrisburg Pennsylvania USA
----
----
End of file
(2) Is it a good design?
(3) Please suggest any alternatives.

Thanks



Tom Kyte
June 22, 2006 - 11:41 am UTC

tables in Oracle may have billions and billions of rows.

as to whether this is a good design - no idea, don't know what the application needs to do with the data.

ftp.pkb works !

Vasile Stavarache, August 08, 2006 - 5:55 pm UTC

Hi Tom,
I just want to tell you I tested the solution using ftp.pkb as suggested by you. I am still testing with different file formats and sizes, but it's OK for now.
Thanks!



PLSQL FTP_INTERFACE solution

Barry C, September 09, 2006 - 12:38 pm UTC

Learning a lot from other resources, I have a relatively complete FTP solution that might benefit many.

FTP_INTERFACE v.3.1.0
Req: 9iR2 +

* Local <=> Remote operations
* LOB <=> Remote operations
* Session Timeout Control automatically kills dead server connections
* Server Type identification
* RMDIR/MKDIR support for remote server
* LS/DIR commands to provide directory listing retrieval
* Oracle Applications integration
* Mainframe Compatible
* Binary/Ascii support

You can get it from http://www.myoracleportal.com

It is Open Source and PLSQL based

Leo, October 25, 2006 - 5:57 pm UTC

Hi Tom,

I'm new to working with Blobs and from looking at your site and Oracle Forums I created two procedures below. I can successfully insert a file (pdf, size 2453kb) to the blob but when I try to retrieve it back to a file I receive this error:

ORA-29285: file write error
ORA-06512: at "SYS.UTL_FILE", line 140
ORA-06512: at "SYS.UTL_FILE", line 379
ORA-06512: at "SYS.UTL_FILE", line 1062
ORA-06512: at "METADATA.GET_LOB", line 135
ORA-06512: at line 7 .

And the dbms_output looks something like this:

Column Length: 2510880
Read 32767 Bytes
Read 65534 Bytes
:
:
Read 2359224 Bytes
Read 2391991 Bytes
Read 2424758 Bytes

Do you know why I'm getting this error and how I can fix it?

Thanks in advance for your feedback.

Leo

CREATE TABLE DEMO
(
TK INTEGER NULL,
THEBLOB BLOB NULL,
FILE_NAME VARCHAR2(100 BYTE) NULL,
FILE_EXT VARCHAR2(3 BYTE) NULL
)

PROCEDURE FILE_TO_BLOB( p_filename IN VARCHAR2 )
AS
v_blob BLOB;
v_bfile BFILE;
v_file_name VARCHAR2(100);
v_file_ext VARCHAR2(3);
BEGIN
--file name without extention
v_file_name := SUBSTR( p_filename, 1, INSTR( p_filename, '.')-1);
--extention of file
v_file_ext := SUBSTR( p_filename, INSTR( p_filename, '.')+1, 5);

INSERT INTO demo (theblob, file_name, file_ext)
VALUES ( EMPTY_BLOB(), v_file_name, v_file_ext)
RETURNING theBlob INTO v_blob;

v_bfile := BFILENAME( 'METADATA_DATA_DIR', p_filename );
DBMS_LOB.FILEOPEN( v_bfile );
DBMS_LOB.LOADFROMFILE( v_blob, v_bfile, Dbms_Lob.getlength( v_bfile ) );
DBMS_LOB.FILECLOSE( v_bfile );
END;

PROCEDURE BLOB_TO_FILE( p_id IN demo.tk%TYPE )
IS
v_blob BLOB;
v_file_name VARCHAR2(100);
v_file_ext VARCHAR2(3);
v_len NUMBER;
v_my_vr RAW(32767);
v_start_pos NUMBER := 1;
v_byte_len NUMBER := 32767;
v_output UTL_FILE.FILE_TYPE;

BEGIN
-- get the blob locator
SELECT theBlob, file_name, file_ext
INTO v_blob, v_file_name, v_file_ext
FROM demo
WHERE tk = p_id;

v_output := UTL_FILE.FOPEN('DATA_DIR', v_file_name||'_o.'||v_file_ext, 'w', 32767);

-- find the length of the blob column
v_len := DBMS_LOB.GETLENGTH(v_blob);
DBMS_OUTPUT.PUT_LINE('Column Length: ' || TO_CHAR(v_len));

IF v_len < v_byte_len THEN
DBMS_LOB.READ( v_blob, v_len, v_start_pos, v_my_vr);
UTL_FILE.PUT( v_output, UTL_RAW.CAST_TO_VARCHAR2(v_my_vr) );
UTL_FILE.FFLUSH( v_output);
DBMS_OUTPUT.PUT_LINE('Read ' || TO_CHAR(v_len) || 'Bytes');
ELSE
DBMS_LOB.READ( v_blob, v_byte_len, v_start_pos, v_my_vr);
UTL_FILE.PUT( v_output, UTL_RAW.CAST_TO_VARCHAR2(v_my_vr) );
UTL_FILE.FFLUSH( v_output);
DBMS_OUTPUT.PUT_LINE('Read ' || TO_CHAR(v_byte_len) || ' Bytes ');
END IF;

v_start_pos := v_start_pos + v_byte_len;

WHILE ( v_start_pos < v_len )
LOOP
-- loop till entire data is fetched
DBMS_LOB.READ( v_blob, v_byte_len, v_start_pos, v_my_vr);
DBMS_OUTPUT.PUT_LINE('Read ' || TO_CHAR( v_byte_len + v_start_pos-1) || ' Bytes ');
UTL_FILE.PUT( v_output, UTL_RAW.CAST_TO_VARCHAR2(v_my_vr) );
UTL_FILE.FFLUSH( v_output);
v_start_pos := v_start_pos + v_byte_len ;
END LOOP;
DBMS_OUTPUT.PUT_LINE('Done!');
END;



Tom Kyte
October 25, 2006 - 6:05 pm UTC

something fishy, what is line 7? what is metadata.get_lob?

Leo, October 25, 2006 - 7:22 pm UTC

I'm using Toad to execute the code below where METADATA is my schema name and GET_LOB is the package name in which FILE_TO_BLOB and BLOB_TO_FILE are procedures.

DECLARE
P_ID NUMBER;

BEGIN
P_ID := 61;

METADATA.GET_LOB.BLOB_TO_FILE ( P_ID );
COMMIT;
END;


Tom Kyte
October 25, 2006 - 9:49 pm UTC

ok, so....

what is line 135?

Leo, October 26, 2006 - 11:00 am UTC

I apologize for my incomplete code. The error was showing line 135 because I had some extra code and extra spaces which I removed from the procedures. The code below is exactly how it appears in my package and procedures.

ORA-29285: file write error
ORA-06512: at "SYS.UTL_FILE", line 140
ORA-06512: at "SYS.UTL_FILE", line 379
ORA-06512: at "SYS.UTL_FILE", line 1062
ORA-06512: at "METADATA.GET_LOB", line 70
ORA-06512: at line 7 .

Error at line 7 refers to METADATA.GET_LOB.BLOB_TO_FILE ( P_ID ); which is in the pl/sql block below which I use to call my procedure.

DECLARE
P_ID NUMBER;

BEGIN
P_ID := 61;

METADATA.GET_LOB.BLOB_TO_FILE ( P_ID );
COMMIT;
END;

Error at line 70 refers to UTL_FILE.FFLUSH( v_output); inside the while loop in procedure BLOB_TO_FILE.

CREATE OR REPLACE PACKAGE BODY METADATA.Get_Lob
AS

PROCEDURE FILE_TO_BLOB( p_filename IN VARCHAR2 )
AS
v_blob BLOB;
v_bfile BFILE;
v_file_name VARCHAR2(100);
v_file_ext VARCHAR2(3);
BEGIN
--file name without extention
v_file_name := SUBSTR( p_filename, 1, INSTR( p_filename, '.')-1);
--extention of file
v_file_ext := SUBSTR( p_filename, INSTR( p_filename, '.')+1, 5);

INSERT INTO demo (theblob, file_name, file_ext)
VALUES ( EMPTY_BLOB(), v_file_name, v_file_ext)
RETURNING theBlob INTO v_blob;

v_bfile := BFILENAME( 'METADATA_DATA_DIR', p_filename );
DBMS_LOB.FILEOPEN( v_bfile );
DBMS_LOB.LOADFROMFILE( v_blob, v_bfile, DBMS_LOB.GETLENGTH( v_bfile ) );
DBMS_LOB.FILECLOSE( v_bfile );
END;

PROCEDURE BLOB_TO_FILE( p_id IN demo.tk%TYPE )
IS
v_blob BLOB;
v_file_name VARCHAR2(100);
v_file_ext VARCHAR2(3);
v_len NUMBER;
v_my_vr RAW(32767);
v_start_pos NUMBER := 1;
v_byte_len NUMBER := 32767;
v_output UTL_FILE.FILE_TYPE;

BEGIN
-- get the blob locator
SELECT theBlob, file_name, file_ext
INTO v_blob, v_file_name, v_file_ext
FROM demo
WHERE tk = p_id;

v_output := UTL_FILE.FOPEN('METADATA_DATA_DIR', v_file_name||'_o.'||v_file_ext, 'w', 32767); --v_file_name||'_o.'||v_file_ext

-- find the length of the blob column
v_len := DBMS_LOB.GETLENGTH(v_blob);
DBMS_OUTPUT.PUT_LINE('Column Length: ' || TO_CHAR(v_len));

IF v_len < v_byte_len THEN
DBMS_LOB.READ( v_blob, v_len, v_start_pos, v_my_vr);
UTL_FILE.PUT( v_output, UTL_RAW.CAST_TO_VARCHAR2(v_my_vr) );
UTL_FILE.FFLUSH( v_output);
DBMS_OUTPUT.PUT_LINE('Read ' || TO_CHAR(v_len) || 'Bytes');
ELSE
DBMS_LOB.READ( v_blob, v_byte_len, v_start_pos, v_my_vr);
UTL_FILE.PUT( v_output, UTL_RAW.CAST_TO_VARCHAR2(v_my_vr) );
UTL_FILE.FFLUSH( v_output);
DBMS_OUTPUT.PUT_LINE('Read ' || TO_CHAR(v_byte_len) || ' Bytes ');
END IF;

v_start_pos := v_start_pos + v_byte_len;

WHILE ( v_start_pos < v_len )
LOOP
-- loop till entire data is fetched
DBMS_LOB.READ( v_blob, v_byte_len, v_start_pos, v_my_vr);
DBMS_OUTPUT.PUT_LINE('Read ' || TO_CHAR( v_byte_len + v_start_pos-1) || ' Bytes ');
UTL_FILE.PUT( v_output, UTL_RAW.CAST_TO_VARCHAR2(v_my_vr) );
UTL_FILE.FFLUSH( v_output);
v_start_pos := v_start_pos + v_byte_len ;
END LOOP;
DBMS_OUTPUT.PUT_LINE('Done!');
END;
END; --end get_lob
/

The output below remains the same as before.

Column Length: 2510880
Read 32767 Bytes
Read 65534 Bytes
:
:
Read 2359224 Bytes
Read 2391991 Bytes
Read 2424758 Bytes


Tom Kyte
October 26, 2006 - 12:23 pm UTC

ahh, well, you are writing a text file, utl_file has an implementation restriction that says "no more than 32k without a newline"

meaning, for a text file, you need to have a newline AT LEAST every 32k.

why are you not using the RAW interface directly there??
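A sketch of the same write loop using the raw interface (it reuses the variable names from the BLOB_TO_FILE procedure above and assumes 10g, where utl_file.fopen accepts the binary 'wb' mode):

-- open in binary mode so nothing is translated and no line-length limit applies
v_output := UTL_FILE.FOPEN( 'METADATA_DATA_DIR',
                            v_file_name || '_o.' || v_file_ext,
                            'wb', 32767 );

v_start_pos := 1;
v_byte_len  := 32767;
WHILE ( v_start_pos <= v_len )
LOOP
    DBMS_LOB.READ( v_blob, v_byte_len, v_start_pos, v_my_vr );
    UTL_FILE.PUT_RAW( v_output, v_my_vr );   -- raw in, raw out
    UTL_FILE.FFLUSH( v_output );
    v_start_pos := v_start_pos + v_byte_len;
END LOOP;
UTL_FILE.FCLOSE( v_output );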

More help please.

Aru, October 26, 2006 - 6:38 pm UTC

Hi Tom,

Above in the thread you suggested

"it is just utl_file.put_raw. you'll have a blob, you'll dbms_lob.read or dbms_lob.substr or whatever part of the blob, and utl_file.put_raw it."

for 9iR2 databases. Please can you suggest some documentation or an example for putting the above into practice.

Thanks as always,
Regards,
Aru,

Tom Kyte
October 27, 2006 - 7:39 am UTC

the plsql supplied packages guide documents utl_file

FTP thru PL/SQL failing

Anil Tanwar, January 29, 2007 - 4:31 am UTC

Hi Tom,

Please refer to your followup to Steve Cover in which you have done the following

------------
http://www.oracle-base.com/Articles/9i/FTPFromPLSQL9i.php
Once you install ftp.pks and ftp.pkb, you can:

tkyte@ORA8I> @z:/home/tkyte/Desktop/test
tkyte@ORA8I> DECLARE
2 l_conn UTL_TCP.connection;
3 l_blob blob;
4 BEGIN
5
6 select longname into l_blob from test_table where rownum = 1;
7
8 l_conn := ftp.login('172.16.95.2', '21', 'tkyte', 'password');
9 ftp.binary(p_conn => l_conn);
10 ftp.put_remote_binary_data
11 (p_conn => l_conn,
12 p_file => '/tmp/foo.dat',
13 p_data => l_blob );
14 ftp.logout(l_conn);
15 utl_tcp.close_all_connections;
16 END;
17 /

PL/SQL procedure successfully completed.


You can actually *ftp* the data to yourself ;)

--------------

I have followed the same example to FTP a binary file to a remote location.

I am able to establish the connection (I opened a debug window through Toad and saw that the connection was taking place and the user was getting authenticated), but ftp.put_remote_binary_data is throwing the following errors.

ORA-00604: error occurred at recursive SQL level 1
ORA-01460: unimplemented or unreasonable conversion requested.

Please help.


Tom Kyte
January 31, 2007 - 1:38 pm UTC

show us an example of your table and data.

and a version would be nice
as well as platform
and perhaps the lines of code this is happening on.... eg: what line of plsql is throwing this.

Can we write into text file from blob column

RK, May 24, 2007 - 1:11 pm UTC

Hi,

I have to write a blob column's content (basically a captured signature file) to a text file. Is this possible?
Tom Kyte
May 26, 2007 - 11:34 am UTC

sure, and this page - well - is sort of dedicated to that.

how do we query data stored in a blob

padmaja, July 13, 2007 - 1:19 am UTC

hello ,

The database is 10g.
The sample table consists of id_no and a blob (PDF files).

I'm able to store PDF files in the blob and also retrieve the PDF file for a given id_no. My problem is: based on the content in the blob, I want to retrieve the id_no.

I have searched the Application Developer's Guide - Large Objects but found not much help.

Waiting for your response.

thanks .
padmaja.

Tom Kyte
July 13, 2007 - 10:13 am UTC

you can index them using Oracle Text - and then search for them using the contains operator

http://www.oracle.com/pls/db102/portal.portal_db?selected=3#index-TEX
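A minimal sketch of that (the table and column names are made up to match the description -- SAMPLE(ID_NO, DOC), where DOC is the blob holding the PDF):

-- AUTO_FILTER extracts the text from binary documents such as PDF
create index sample_doc_idx on sample ( doc )
    indextype is ctxsys.context
    parameters ( 'filter ctxsys.auto_filter' );

-- then search the content and get the matching id_no values back
select id_no
  from sample
 where contains( doc, 'some search phrase' ) > 0;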

padmaja, July 18, 2007 - 12:00 am UTC

thanks a lot . it worked .

Storing files using packages

Ramkumar, September 04, 2007 - 11:06 pm UTC

Dear Tom,
Hoo to store a file to database which contains id and BLOB using packages. I have to write Stored Procedures inside package and package body.
also tell me to retrieve the file using package.
Thanks and Regards,
N.ramkumar
Tom Kyte
September 05, 2007 - 2:00 pm UTC

huh?

this specification is so poorly worded - I don't know what you are saying.

"how to store a file to database which contains id and blob using packages"

a file contains an ID and a BLOB?

Yay outsourcing!

A reader, September 05, 2007 - 2:33 pm UTC


Excellent

Av, October 04, 2007 - 9:37 am UTC

Great Work Tom, Keep it up and thanks a lot

Retrieving Data from BLOB into a file

prabhu, November 16, 2007 - 3:48 pm UTC

The complete PL/SQL code that Robert Hanrahan provided was really helpful. I had just one day to get the files from the DB and save them to DVD, and I searched for hours before I found this solution. I had never worked on a file retrieval process before, but I have PL/SQL experience. I am not a Java person, so this was the best solution.

Thanks, Prabhu.

Getting data from blob into file

Ricky, January 16, 2008 - 12:40 am UTC

Hi,
I was trying to unload blob data into a file, and I was able to create the file. But the problem is, instead of receiving a 60 KB text file and a 68 KB PDF file, I am receiving 62 KB and 70 KB respectively.
I am able to open the text file and view the content in it, but for the PDF file, when I try to open it, I am getting the following error:

'An unrecognized token 'gd' was found'
Is my editor not capable of dealing with the leading bytes in a Unicode file? If so, what should be done about that?


Below is my procedure. Please help me with this problem.

create or replace procedure SAVE_TO_FILE
( p_filename in varchar2 )
as
    vblob    blob;
    i2       number;
    amt      number := 10000;
    len      number;
    my_vr    raw(10000);
    l_output utl_file.file_type;
    p_dir    varchar2(30) default 'DIR';
    p_file   varchar2(30) := p_filename;
begin
    l_output := utl_file.fopen(p_dir, p_file, 'a', 32760);
    SELECT content into vblob FROM demo where id = 2002;

    len := DBMS_LOB.GETLENGTH(vblob);
    dbms_output.put_line('Length of the Column : ' || to_char(len));

    i2 := 1;
    if len < 10000 then
        DBMS_LOB.READ(vblob, len, i2, my_vr);
        utl_file.put(l_output, my_vr);
    else
        DBMS_LOB.READ(vblob, amt, i2, my_vr);
        utl_file.put(l_output, UTL_RAW.CAST_TO_VARCHAR2(my_vr));
    end if;

    i2 := i2 + amt;
    while (i2 < len) loop
        dbms_output.put_line('i2 : ' || to_char(i2));
        DBMS_LOB.READ(vblob, amt, i2, my_vr);
        utl_file.put(l_output, UTL_RAW.CAST_TO_VARCHAR2(my_vr));
        utl_file.fflush(l_output);
        i2 := i2 + amt;
    end loop;

    utl_file.fclose(l_output);
    commit;
end SAVE_TO_FILE;


Tom Kyte
January 16, 2008 - 3:34 pm UTC

why do you commit????? man oh man. why - why - why would you commit in
a) a stored procedure
b) that doesn't even MODIFY anything

making it so that your routine is so unsafe to call from anything....

I can say that by using the varchar (text) methods, you are causing a newline to be injected AT LEAST every 32k (changing the data)


there is a put_raw routine. you have raw data, use the raw routines please.

Karthick, June 11, 2008 - 3:32 am UTC

Can I ftp a file from the server to my local system using the ftp package?
Tom Kyte
June 11, 2008 - 8:15 am UTC

if you have an ftp service running on your local system - sure.

Getting data from blob into file

Ricky, June 16, 2008 - 9:14 am UTC

with reference to ur reply instead of using the below statement

utl_file.put(l_output, UTL_RAW.CAST_TO_VARCHAR2(my_vr) );

i used this one.

utl_file.put_raw(l_output,(my_vr) );

but still i am getting the file with a larger size than the original file size.

when i try to open the file i get the error as
file is damaged .
can u please help me with this problem
Tom Kyte
June 16, 2008 - 1:26 pm UTC

utl_file.put_raw( varchar )

is not going to work; put_raw expects a raw, so it'll treat a varchar2 as a bunch of hex characters.

"U" is unavailable.


you should be put_raw'ing the raw buffer itself -- not a utl_raw.cast_to_varchar2 of it


use put_raw with the raw data, and open the file in binary mode so nothing gets translated
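In other words (a sketch reusing the variables from the procedure above; the binary 'wb' open mode assumes 10g or later):

-- open the file in binary mode rather than appending in text mode
l_output := utl_file.fopen(p_dir, p_file, 'wb', 32760);

-- and write the raw buffer straight through, no casting
utl_file.put_raw(l_output, my_vr);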

blob

A reader, June 17, 2008 - 4:41 pm UTC


Getting strange results when storing a gzipped file

Stewart Bryson, October 16, 2008 - 1:11 pm UTC

Database is 10.2.0.4

On the Unix database server:
$ echo "this is a test file" > test1.txt
$ ls -ltr
total 2
-rw-r--r--   1 eusb009  dbadev        20 Oct 16 12:45 test1.txt
$ 


Now, I run the following procedure in SQL-Plus on the client:

SQL> DECLARE
  2     l_directory       VARCHAR2(30)            := 'WPS_OUT';
  3     l_filename1       VARCHAR2(30)            := 'test1.txt';
  4     l_filename2       VARCHAR2(30)            := 'test2.txt';
  5  
  6     l_blob            BLOB;
  7     l_max_linesize    NUMBER                  := 32767;
  8     l_buffersize      NUMBER;
  9     l_buffer          RAW(32767);
 10     l_amount          NUMBER;
 11     l_offset          NUMBER := 1;
 12  
 13     l_dest_blob       BLOB;
 14     l_src_lob         BFILE;
 15     l_dst_offset      NUMBER := 1;
 16     l_src_offset      NUMBER := 1;
 17     l_lang_ctx        NUMBER := DBMS_LOB.default_lang_ctx;
 18     l_warning         NUMBER;
 19  
 20     l_fh              utl_file.file_type;
 21  BEGIN
 22  
 23  
 24     l_src_lob := bfilename ( l_directory, l_filename1 );
 25  
 26     INSERT INTO t1
 27            ( file_name, file_blob
 28            )
 29     VALUES ( l_filename1, empty_blob()
 30            )
 31            RETURNING file_blob
 32            INTO l_dest_blob;
 33  
 34     -- OPEN the source LOB to get ready to write it
 35     dbms_lob.OPEN (l_src_lob, dbms_lob.lob_readonly);
 36  
 37     dbms_lob.loadblobfromfile ( dest_lob          => l_dest_blob,
 38                                 src_bfile         => l_src_lob,
 39                                 amount            => dbms_lob.getlength(l_src_lob),
 40                                 dest_offset       => l_dst_offset,
 41                                 src_offset        => l_src_offset
 42                               );
 43  
 44     -- now close the soure lob
 45     dbms_lob.CLOSE (l_src_lob);
 46  
 47     SELECT file_blob
 48       INTO l_blob
 49       FROM t1
 50      WHERE file_name=l_filename1;
 51  
 52     -- OPEN the file handle
 53     l_fh := utl_file.fopen( location => l_directory,
 54                             filename => l_filename2,
 55                             open_mode => 'w',
 56                             max_linesize => l_max_linesize);
 57  
 58     -- get information about the LOB being used
 59     l_buffersize := dbms_lob.getchunksize( l_blob ) ;
 60     -- USE the smallest buffer we can
 61     l_buffersize := CASE WHEN l_buffersize < 32767 THEN l_buffersize ELSE 32767 END;
 62  
 63     -- get the amount variable ready for the loop
 64     l_amount := l_buffersize;
 65     -- keep writing output as long as we are getting some from the file
 66     -- we know that we still have content as long as the amount read is the same as the buffer
 67     WHILE l_amount >= l_buffersize
 68     LOOP
 69  
 70        -- READ into the buffer
 71        dbms_lob.READ( lob_loc    => l_blob,
 72                       amount     => l_amount,
 73                       offset     => l_offset,
 74                       buffer     => l_buffer);
 75  
 76        -- reset the offset based on the amount read in
 77        l_offset := l_offset + l_amount;
 78  
 79  
 80        -- now write the contents to the file
 81        utl_file.put_raw ( FILE      => l_fh,
 82                           buffer    => l_buffer,
 83                           autoflush => TRUE);
 84  
 85        -- FLUSH the contents out to the file
 86        utl_file.fflush( FILE => l_fh );
 87  
 88     END LOOP;
 89  
 90     -- CLOSE the file handle
 91     utl_file.fclose( l_fh );
 92  
 93  END;
 94  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.06
SQL> 


Now, back on the Unix server:
$ ls -ltr
total 4
-rw-r--r--   1 eusb009  dbadev        20 Oct 16 12:45 test1.txt
-rw-r--r--   1 dbddw    dba           20 Oct 16 12:46 test2.txt
$ 


Looks good right?

Now, use a gzipped file:
$ rm test2.txt    
rm: test2.txt: override protection 644 (yes/no)? yes
$ gzip test1.txt
$ ls -ltr
total 2
-rw-r--r--   1 eusb009  dbadev        48 Oct 16 12:45 test1.txt.gz
$ 


And I execute the same code with a different filename:

SQL> DECLARE
  2     l_directory       VARCHAR2(30)            := 'WPS_OUT';
  3     l_filename1       VARCHAR2(30)            := 'test1.txt.gz';
  4     l_filename2       VARCHAR2(30)            := 'test2.txt.gz';
  5  
  6     l_blob            BLOB;
  7     l_max_linesize    NUMBER                  := 32767;
  8     l_buffersize      NUMBER;
  9     l_buffer          RAW(32767);
 10     l_amount          NUMBER;
 11     l_offset          NUMBER := 1;
 12  
 13     l_dest_blob       BLOB;
 14     l_src_lob         BFILE;
 15     l_dst_offset      NUMBER := 1;
 16     l_src_offset      NUMBER := 1;
 17     l_lang_ctx        NUMBER := DBMS_LOB.default_lang_ctx;
 18     l_warning         NUMBER;
 19  
 20     l_fh              utl_file.file_type;
 21  BEGIN
 22  
 23  
 24     l_src_lob := bfilename ( l_directory, l_filename1 );
 25  
 26     INSERT INTO t1
 27            ( file_name, file_blob
 28            )
 29     VALUES ( l_filename1, empty_blob()
 30            )
 31            RETURNING file_blob
 32            INTO l_dest_blob;
 33  
 34     -- OPEN the source LOB to get ready to write it
 35     dbms_lob.OPEN (l_src_lob, dbms_lob.lob_readonly);
 36  
 37     dbms_lob.loadblobfromfile ( dest_lob          => l_dest_blob,
 38                                 src_bfile         => l_src_lob,
 39                                 amount            => dbms_lob.getlength(l_src_lob),
 40                                 dest_offset       => l_dst_offset,
 41                                 src_offset        => l_src_offset
 42                               );
 43  
 44     -- now close the soure lob
 45     dbms_lob.CLOSE (l_src_lob);
 46  
 47     SELECT file_blob
 48       INTO l_blob
 49       FROM t1
 50      WHERE file_name=l_filename1;
 51  
 52     -- OPEN the file handle
 53     l_fh := utl_file.fopen( location => l_directory,
 54                             filename => l_filename2,
 55                             open_mode => 'w',
 56                             max_linesize => l_max_linesize);
 57  
 58     -- get information about the LOB being used
 59     l_buffersize := dbms_lob.getchunksize( l_blob ) ;
 60     -- USE the smallest buffer we can
 61     l_buffersize := CASE WHEN l_buffersize < 32767 THEN l_buffersize ELSE 32767 END;
 62  
 63     -- get the amount variable ready for the loop
 64     l_amount := l_buffersize;
 65     -- keep writing output as long as we are getting some from the file
 66     -- we know that we still have content as long as the amount read is the same as the buffer
 67     WHILE l_amount >= l_buffersize
 68     LOOP
 69  
 70        -- READ into the buffer
 71        dbms_lob.READ( lob_loc    => l_blob,
 72                       amount     => l_amount,
 73                       offset     => l_offset,
 74                       buffer     => l_buffer);
 75  
 76        -- reset the offset based on the amount read in
 77        l_offset := l_offset + l_amount;
 78  
 79  
 80        -- now write the contents to the file
 81        utl_file.put_raw ( FILE      => l_fh,
 82                           buffer    => l_buffer,
 83                           autoflush => TRUE);
 84  
 85        -- FLUSH the contents out to the file
 86        utl_file.fflush( FILE => l_fh );
 87  
 88     END LOOP;
 89  
 90     -- CLOSE the file handle
 91     utl_file.fclose( l_fh );
 92  
 93  END;
 94  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.04
SQL> 


And now, on the Unix server:
$ ls -ltr
total 4
-rw-r--r--   1 eusb009  dbadev        48 Oct 16 12:45 test1.txt.gz
-rw-r--r--   1 dbddw    dba           49 Oct 16 12:49 test2.txt.gz
$ gunzip test2.txt.gz

gunzip: test2.txt.gz: unexpected end of file
$ 


Thanks for any help on this. I suspect this may be a "support" thing, where they would likely tell me "we don't support gzipped files in BLOBS".
Tom Kyte
October 17, 2008 - 9:10 pm UTC

od -c that for me, see if there isn't an extra newline being added at the end.

Found the answer

Stewart Bryson, October 17, 2008 - 2:05 pm UTC

At line 55, I opened the filehandle 'w'. I should have opened it 'wb'.

Thanks for not answering too soon. :-)

Results for corrected code

Stewart Bryson, October 20, 2008 - 5:29 pm UTC

On the unix server:
$ echo "this is a test file" > test1.txt
$ gzip test1.txt
$ ls -ltr
total 2
-rw-r--r--   1 eusb009  dbadev        48 Oct 20 17:01 test1.txt.gz
$

Now, execute the modified code, with the filehandle opened with a p_mode of 'wb':
SQL> DECLARE
  2     l_directory       VARCHAR2(30)            := 'WPS_OUT';
  3     l_filename1       VARCHAR2(30)            := 'test1.txt.gz';
  4     l_filename2       VARCHAR2(30)            := 'test2.txt.gz';
  5  
  6     l_blob            BLOB;
  7     l_max_linesize    NUMBER                  := 32767;
  8     l_buffersize      NUMBER;
  9     l_buffer          RAW(32767);
 10     l_amount          NUMBER;
 11     l_offset          NUMBER := 1;
 12  
 13     l_dest_blob       BLOB;
 14     l_src_lob         BFILE;
 15     l_dst_offset      NUMBER := 1;
 16     l_src_offset      NUMBER := 1;
 17     l_lang_ctx        NUMBER := DBMS_LOB.default_lang_ctx;
 18     l_warning         NUMBER;
 19  
 20     l_fh              utl_file.file_type;
 21  BEGIN
 22  
 23  
 24     l_src_lob := bfilename ( l_directory, l_filename1 );
 25  
 26     INSERT INTO t1
 27            ( file_name, file_blob
 28            )
 29     VALUES ( l_filename1, empty_blob()
 30            )
 31            RETURNING file_blob
 32            INTO l_dest_blob;
 33  
 34     -- OPEN the source LOB to get ready to write it
 35     dbms_lob.OPEN (l_src_lob, dbms_lob.lob_readonly);
 36  
 37     dbms_lob.loadblobfromfile ( dest_lob          => l_dest_blob,
 38                                 src_bfile         => l_src_lob,
 39                                 amount            => dbms_lob.getlength(l_src_lob),
 40                                 dest_offset       => l_dst_offset,
 41                                 src_offset        => l_src_offset
 42                               );
 43  
 44     -- now close the source lob
 45     dbms_lob.CLOSE (l_src_lob);
 46  
 47     SELECT file_blob
 48       INTO l_blob
 49       FROM t1
 50      WHERE file_name=l_filename1;
 51  
 52     -- OPEN the file handle
 53     l_fh := utl_file.fopen( location => l_directory,
 54                             filename => l_filename2,
 55                             open_mode => 'wb',
 56                             max_linesize => l_max_linesize);
 57  
 58     -- get information about the LOB being used
 59     l_buffersize := dbms_lob.getchunksize( l_blob ) ;
 60     -- USE the smallest buffer we can
 61     l_buffersize := CASE WHEN l_buffersize < 32767 THEN l_buffersize ELSE 32767 END;
 62  
 63     -- get the amount variable ready for the loop
 64     l_amount := l_buffersize;
 65     -- keep writing output as long as we are getting some from the file
 66     -- we know that we still have content as long as the amount read is the same as the buffer
 67     WHILE l_amount >= l_buffersize
 68     LOOP
 69  
 70        -- READ into the buffer
 71        dbms_lob.READ( lob_loc    => l_blob,
 72                       amount     => l_amount,
 73                       offset     => l_offset,
 74                       buffer     => l_buffer);
 75  
 76        -- reset the offset based on the amount read in
 77        l_offset := l_offset + l_amount;
 78  
 79  
 80        -- now write the contents to the file
 81        utl_file.put_raw ( FILE      => l_fh,
 82                           buffer    => l_buffer,
 83                           autoflush => TRUE);
 84  
 85        -- FLUSH the contents out to the file
 86        utl_file.fflush( FILE => l_fh );
 87  
 88     END LOOP;
 89  
 90     -- CLOSE the file handle
 91     utl_file.fclose( l_fh );
 92  
 93  END;
 94  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.59
SQL> 

And back on the Unix server:
$ ls -ltr
total 4
-rw-r--r--   1 eusb009  dbadev        48 Oct 20 17:01 test1.txt.gz
-rw-r--r--   1 dbddw    dba           48 Oct 20 17:04 test2.txt.gz
$ gzip -d test2.txt.gz
$ cat test2.txt
this is a test file
$

reading BLOB from database

Manoj, January 12, 2010 - 6:20 am UTC

Hi Tom,

Can you please post sample code to read a BLOB from a table?

Is it possible to decrypt the BLOB values in case they are encrypted? I am not very sure about the encryption and decryption part, but I have heard Pega (a Pegasystems product) does it.
Tom Kyte
January 18, 2010 - 12:46 pm UTC

SQL> select blob_column from t;

I am done. Probably not what you wanted, but hey, you got what you asked for.


http://docs.oracle.com/docs/cd/E11882_01/appdev.112/e10645/toc.htm

We can use securefiles and/or encrypted tablespaces in 11g to accomplish transparent data encryption of blobs, yes.
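
To go a little further than the one-line query above, here is a minimal sketch of reading a BLOB out to a file using the same DBMS_LOB/UTL_FILE technique shown earlier in this thread. The table t, its columns blob_column and id, the directory object MY_DIR and the file name output.dat are all hypothetical placeholders:

declare
   l_blob    blob;
   l_buffer  raw(32767);
   l_amount  binary_integer;
   l_offset  integer := 1;
   l_fh      utl_file.file_type;
begin
   -- fetch the LOB locator from the (hypothetical) table
   select blob_column into l_blob from t where id = 1;

   -- open the destination file in binary mode ('wb'), per the discussion above
   l_fh := utl_file.fopen( 'MY_DIR', 'output.dat', 'wb', 32767 );

   -- copy the BLOB out one chunk at a time
   while l_offset <= dbms_lob.getlength( l_blob )
   loop
      l_amount := 32767;
      dbms_lob.read( l_blob, l_amount, l_offset, l_buffer );
      utl_file.put_raw( l_fh, l_buffer, true );
      l_offset := l_offset + l_amount;
   end loop;

   utl_file.fclose( l_fh );
end;
/

As for the encryption part, the SecureFiles option Tom mentions looks roughly like this (again only a sketch, assuming 11g and an already-configured encryption wallet):

create table t2 ( id number, doc blob )
  lob (doc) store as securefile ( encrypt using 'AES256' );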



Script to save blob from Database

Dami, February 15, 2010 - 4:47 am UTC

Hi,
This is still along the lines of the questions above; I'm sorry if it seems I may be repeating something. I want to create a batch file (a PL/SQL script) which reads all the BLOBs in a column and saves each one in a folder of its own on the system (giving each a different name, of course). You said earlier that PL/SQL can write binary files using UTL_FILE, but that was years ago. Has a simpler method been introduced since then? Can you please direct me to, or write, a simple script to do what I need? I would really, REALLY appreciate your answer.
Thank you
Tom Kyte
February 16, 2010 - 5:03 pm UTC

same answer as before.
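
To spell that answer out for this particular request, here is a minimal sketch that wraps the UTL_FILE technique shown above in a cursor loop so each row's BLOB is written to its own file. The table t, its columns file_name and file_blob, and the directory object BLOB_DIR are hypothetical names:

declare
   l_buffer  raw(32767);
   l_amount  binary_integer;
   l_offset  integer;
   l_fh      utl_file.file_type;
begin
   -- one pass over every row that has a BLOB to unload
   for r in ( select file_name, file_blob from t )
   loop
      l_offset := 1;
      -- open a separate file per row, in binary mode
      l_fh := utl_file.fopen( 'BLOB_DIR', r.file_name, 'wb', 32767 );

      while l_offset <= dbms_lob.getlength( r.file_blob )
      loop
         l_amount := 32767;
         dbms_lob.read( r.file_blob, l_amount, l_offset, l_buffer );
         utl_file.put_raw( l_fh, l_buffer, true );
         l_offset := l_offset + l_amount;
      end loop;

      utl_file.fclose( l_fh );
   end loop;
end;
/

Note that UTL_FILE can only write into directory objects, so "a folder of its own" per file would mean creating one directory object per target folder; the sketch uses a single directory for simplicity.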

dump blob

A reader, May 20, 2011 - 4:25 pm UTC


export an image from an Oracle database into a file

javanshir, July 11, 2011 - 5:32 pm UTC

hello Tom
I want to present Oracle Multimedia at our university, but I can't run this code.
I wrote my code in 5 SQL files and executed them in Oracle SQL Developer. The first 4 files run successfully, but 5.sql raises errors. Please help me.
-------------------1.sql
create or replace directory imgdir as 'c:\demo';

---------------2.sql
drop table imgtable;
create table imgtable (id number,
Image ordsys.ordImage);

-- Insert a row with an empty BLOB.
insert into imgtable values(1,ORDSYS.ORDImage.init());

-- Insert a row with an empty BLOB.
insert into imgtable values(2,ORDSYS.ORDImage.init());
insert into imgtable values(3,ORDSYS.ORDImage.init());
commit;

DECLARE
obj ORDSYS.ORDIMAGE;
ctx RAW(64) := NULL;
BEGIN
-- This imports the image file img71.gif from the IMGDIR directory
-- on a local file system (srcType=file) and sets the properties.

select Image into obj from imgtable where id = 1 for update;
obj.setSource('file','IMGDIR','img71.gif');
obj.import(ctx);

update imgtable set image = obj where id = 1;
commit;

-----------------3.sql
-- This imports the image file img50.gif from the IMGDIR directory
-- on a local file system (srcType=file) and sets the properties.

select Image into obj from imgtable where id = 2 for update;
obj.setSource('file','IMGDIR','img50.gif');
obj.import(ctx);

update imgtable set image = obj where id = 2;
commit;

-- This imports the image file OceanFish.gif from the IMGDIR directory
-- on a local file system (srcType=file) and sets the properties.

select Image into obj from imgtable where id = 3 for update;
obj.setSource('file','IMGDIR','OceanFish.gif');
obj.import(ctx);

update imgtable set image = obj where id = 3;
commit;

END;

--------------------4.sql
DECLARE
image ORDSYS.ORDImage;
idnum integer;
properties_match BOOLEAN;

BEGIN
FOR I IN 1..3 LOOP
SELECT id, image into idnum, image from imgtable where id=I;
dbms_output.put_line('image id: '|| idnum);
properties_match := image.checkProperties();
IF properties_match THEN
DBMS_OUTPUT.PUT_LINE('Check Properties Succeeded');
END IF;

dbms_output.put_line('image height: '|| image.getHeight());
dbms_output.put_line('image width: '|| image.getWidth());
dbms_output.put_line('image MIME type: '|| image.getMimeType());
dbms_output.put_line('image file format: '|| image.getFileFormat());
dbms_output.put_line('BLOB Length: '|| TO_CHAR(image.getContentLength()));
dbms_output.put_line('-------------------------------------------');

END loop;
END;

-------------------5.sql
DECLARE
l_file_name VARCHAR2(10) := 'Test1.gif';
l_file UTL_FILE.file_type;
l_buffer RAW(32767);
l_amount BINARY_INTEGER := 32767;
l_pos INTEGER := 1;
l_blob BLOB;
l_blob_len INTEGER;
BEGIN
-- Get LOB locator
SELECT imgtbl.image.getContent()
INTO l_blob
FROM imgtable imgtbl
WHERE imgtbl.id = 1;

l_blob_len := DBMS_LOB.GETLENGTH(l_blob);

-- Open the destination file.
l_file := UTL_FILE.FOPEN('imgdir',l_file_name,'w', 32767);

-- Read chunks of the BLOB and write them to the file
-- until complete.
WHILE l_pos < l_blob_len LOOP
DBMS_LOB.READ (l_blob, l_amount, l_pos, l_buffer);
UTL_FILE.PUT_RAW(l_file, l_buffer, TRUE);
l_pos := l_pos + l_amount;
END LOOP;

-- Close the file.
UTL_FILE.FCLOSE(l_file);


END;

------------------ after running the script I receive this message:
Error report:
ORA-29280: invalid directory path
ORA-06512: at "SYS.UTL_FILE", line 29
ORA-06512: at "SYS.UTL_FILE", line 448
ORA-06512: at line 19
29280. 00000 - "invalid directory path"
*Cause: A corresponding directory object does not exist.
*Action: Correct the directory object parameter, or create a corresponding
directory object with the CREATE DIRECTORY command.
Tom Kyte
July 13, 2011 - 1:30 pm UTC

object names are case sensitive when passed around as strings like this:

l_file := UTL_FILE.FOPEN('imgdir',l_file_name,'w', 32767);


your directory name is 'IMGDIR', not 'imgdir'
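
For completeness, a minimal sketch of the corrected call in 5.sql: the directory object was created as imgdir without quotes, so it is stored as IMGDIR in the data dictionary, and 'wb' rather than 'w' is the safer mode for a binary .gif, per the earlier discussion in this thread:

-- the directory created as: create or replace directory imgdir as 'c:\demo';
-- is stored as IMGDIR, so pass the name in uppercase; use 'wb' for binary data
l_file := UTL_FILE.FOPEN('IMGDIR', l_file_name, 'wb', 32767);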

Simple 1 off way to access blobs

Simon Barker, April 16, 2012 - 9:02 am UTC

Hi,

Not really a question, but I came here trying to find as simple a way of getting the contents of a blob as I could. We attach small image files to our finance system, and I have a report that picks up any blobs over 1mb (the ones we attach should be under 100k, so bigger than 1mb means someone's scanned something wrong).

Anyway there are lots of great answers here but I have lazy fingers and they all have too much typing.

We got a 7mb attachment this week but it had no filename and no real way of finding out what it was. This is what I ended up doing:

sqlplus as apps

SQL> select fnd_gfm.construct_download_URL(fnd_web_config.gfm_agent,FILE_ID) FROM
 dual;

The FILE_ID is from my large objects report:

set term off
set linesize 110
set pagesize 50000
col FILE_NAME format A35
col TYPE format A25
col SIZE_MB format 9999.99
break on report
compute sum label "Total LOBS Attached This Week" of SIZE_MB on report
spool blbd.txt
select file_name, file_id,
to_char(upload_date,'dd-mon-rr hh24:mi:ss') UPLOAD_DATE,
file_content_type TYPE,
dbms_lob.getlength(file_data)/1048576 SIZE_MB
from applsys.fnd_lobs
where dbms_lob.getlength(file_data) > 1048576
-- where dbms_lob.getlength(file_data) > 102400
and upload_date > (sysdate - 7)
-- where upload_date > (sysdate - 7)
order by SIZE_MB Desc
/
exit

But you can just query it from applsys.fnd_lobs. Anyway, the fnd_gfm.construct_download_URL output can be pasted into a web browser session that's already connected to the application. This is an example of the output:

http://SERVER.FullyQualifiedDomainName:PORT/pls/INSTANCE/fndgfm/fnd_gfm.get/NUMBER/FILE_ID/FILENAME

So if you know the file_id value from the applsys.fnd_lobs table, you can get a URL link to the blob; then just open a browser session connected to the application and paste the output into the address bar to get a copy of the blob.
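
The two steps can also be combined into a single query (a sketch only, assuming access to the APPS schema and the same fnd_lobs columns used in the report above):

select file_name,
       fnd_gfm.construct_download_url( fnd_web_config.gfm_agent, file_id ) download_url
  from applsys.fnd_lobs
 where dbms_lob.getlength(file_data) > 1048576
   and upload_date > (sysdate - 7)
 order by dbms_lob.getlength(file_data) desc;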

Handy if you only have a couple to examine and don't like typing.

Hope it helps.

PS: for curious types, the 7mb file appears to be a report that saved itself (whilst still being created) as a blob; no one has any idea why.

Tom Kyte
April 16, 2012 - 3:59 pm UTC

great, but specific, for Oracle apps - thanks!
