What API are you using? Typically that is the constraining factor. PL/SQL limits a varchar2 or raw variable to 32k, so through PL/SQL you can read/write at most 32k at a time.
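As a minimal PL/SQL sketch of the read side (the table and column names here are placeholders matching the example below, and this is untested), dbms_lob.read pulls the BLOB back one 32k chunk at a time:

```sql
declare
    l_blob    blob;
    l_buffer  raw(32000);              -- a raw variable tops out near 32k in PL/SQL
    l_amount  binary_integer := 32000; -- in: bytes requested, out: bytes actually read
    l_offset  integer := 1;            -- dbms_lob offsets are 1-based
begin
    select theBlob into l_blob from demo where id = 1;
    loop
        dbms_lob.read( l_blob, l_amount, l_offset, l_buffer );
        -- ... process l_buffer here ...
        l_offset := l_offset + l_amount;
    end loop;
exception
    when no_data_found then
        null;  -- dbms_lob.read raises this when the offset passes the end of the lob
end;
/
```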
For example, here is a Java program that writes a BLOB 32,000 bytes at a time:
import java.sql.*;
import java.io.*;
import oracle.jdbc.driver.*;
import oracle.sql.*;

class test
{
    public static void main( String args[] )
        throws SQLException, FileNotFoundException, IOException
    {
        DriverManager.registerDriver( new oracle.jdbc.driver.OracleDriver() );
        Connection conn =
            DriverManager.getConnection( "jdbc:oracle:thin:@aria:1521:ora8i",
                                         "scott", "tiger" );
        conn.setAutoCommit( false );

        Statement stmt = conn.createStatement();

        // start with an empty lob locator we can append to
        stmt.execute( "delete from demo" );
        System.out.println( "deleted from demo" );
        stmt.execute( "insert into demo (id,theBlob) values (1,empty_blob())" );
        System.out.println( "inserted into demo" );
        conn.commit();
        System.out.println( "committed" );

        // lock the row so we are allowed to write through the lob locator
        ResultSet rset = stmt.executeQuery(
            "SELECT theBlob FROM demo where id = 1 FOR UPDATE" );
        System.out.println( "Executed Query" );

        if ( rset.next() )
        {
            System.out.println( "Fetched row" );
            BLOB l_mapBLOB = ((OracleResultSet)rset).getBLOB(1);

            File            binaryFile = new File( "/tmp/binary.dat" );
            FileInputStream instream   = new FileInputStream( binaryFile );

            int    chunk    = 32000;   // stay under the 32k PL/SQL limit
            byte[] l_buffer = new byte[chunk];
            int    l_nread  = 0;

            OracleCallableStatement cstmt =
                (OracleCallableStatement)conn.prepareCall(
                    "begin dbms_lob.writeappend( :1, :2, :3 ); end;" );
            cstmt.registerOutParameter( 1, OracleTypes.BLOB );

            // read the file a chunk at a time, appending each chunk to the lob
            while ( (l_nread = instream.read(l_buffer)) != -1 )
            {
                cstmt.setBLOB( 1, l_mapBLOB );
                cstmt.setInt( 2, l_nread );
                cstmt.setBytes( 3, l_buffer );
                cstmt.executeUpdate();
                l_mapBLOB = cstmt.getBLOB(1);
            }
            instream.close();
            conn.commit();
        }
        rset.close();
        stmt.close();
    }
}
If you have 8.0, you'll have to use dbms_lob.write, NOT dbms_lob.writeappend (or write your own lob-append routine, which might be easier). The example sets the write chunk size to almost 32k -- the biggest we can do in the thin driver, or whenever PL/SQL is involved.
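Such a home-grown append can be a thin wrapper over dbms_lob.write, using the current lob length to compute the write offset. A sketch (the procedure name is made up, and this is untested):

```sql
create or replace procedure my_writeappend( p_lob    in out blob,
                                            p_amount in     binary_integer,
                                            p_buffer in     raw )
as
begin
    -- dbms_lob offsets are 1-based, so writing at length+1 appends to the end
    dbms_lob.write( p_lob, p_amount, dbms_lob.getlength(p_lob) + 1, p_buffer );
end;
/
```

Calling this with the same :1, :2, :3 binds as the Java example would then behave like dbms_lob.writeappend.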
To support UTF-8, you need a database that was created with the UTF-8 character set.
Using UTF-8, you could have one client accessing the database in English with NLS_LANG set to American_America.US7ASCII and another client accessing it with an NLS_LANG of Japanese_Japan.JA16SJIS. A database character set of UTF-8 will be able to accommodate both of these client character sets, and with minimal overhead.
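The "minimal overhead" point is easy to see from the encoding itself: ASCII characters stay one byte each in UTF-8, while Japanese characters expand to three bytes each. A small, self-contained Java illustration (not part of the example above):

```java
import java.nio.charset.StandardCharsets;

class Utf8Demo
{
    // number of bytes a string occupies when encoded as UTF-8
    static int utf8Length( String s )
    {
        return s.getBytes( StandardCharsets.UTF_8 ).length;
    }

    public static void main( String args[] )
    {
        // plain ASCII: one byte per character, no overhead at all
        System.out.println( utf8Length( "scott" ) );   // 5 characters -> 5 bytes

        // Japanese (kanji): three bytes per character
        System.out.println( utf8Length( "日本語" ) );  // 3 characters -> 9 bytes
    }
}
```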
For more information and other character set considerations,
please see Chapter 3 of the Oracle8i National Language Support
Guide, available online at:
http://docs.oracle.com/cd/F49540_01/DOC/server.815/a67789/toc.htm