
Tom Kyte

Thanks for the question, Christian.

Asked: January 22, 2001 - 4:31 am UTC

Last updated: July 01, 2013 - 6:52 pm UTC

Version: 8.1.5 and 8.1.6 (Linux)


You Asked

Hello Tom!

I have a performance problem concerning character data of length >4000 characters, which I can't store in a VARCHAR2 field.

I insert and then read back a set of data entries of about 100 KB each. When I do this using a test table with the attributes (id INTEGER, data CLOB), the operation takes approximately 10 times longer than with a table of (id INTEGER, data LONG). The effect shows up both with Java using JDBC (classes12.zip) and with Perl using DBI (DBI: 1.13; DBD: 1.06), so I suppose the problem lies not with the application but with the Oracle server (or its configuration).

SQL code for creating the tables T1 and T2:

create table t1 (id integer, data clob);
create table t2 (id integer, data long);


The Java code for inserting a CLOB is:

PreparedStatement pstat1 =
    conn.prepareStatement("insert into T1 values (?, empty_clob())");
PreparedStatement pstat2 =
    conn.prepareStatement("select DATA from T1 where id=? for update");

for (int id = 1; id <= ENTRIES; id++) {
    System.out.println(" Inserting ID " + id + "...");
    pstat1.setInt(1, id);
    pstat1.executeUpdate();

    pstat2.setInt(1, id);
    ResultSet res = pstat2.executeQuery();
    if (res.next()) {
        CLOB clob = (CLOB) res.getClob(1);
        clob.putString(1, outString);
    }
    res.close();
    conn.commit();
}

Java code for reading a CLOB:

PreparedStatement pstat =
    conn.prepareStatement("select DATA from T1 where id=?");
for (int id = 1; id <= ENTRIES; id++) {
    pstat.setInt(1, id);
    ResultSet res = pstat.executeQuery();
    while (res.next()) {
        Clob dataClob = res.getClob(1);
        String data = dataClob.getSubString(1, (int) dataClob.length());
        //System.out.println(data + ": " + data.length());
        System.out.println(" " + id + ": length: " + data.length() +
                           "; data: " + data.charAt(0) + data.charAt(1) +
                           "..." + data.charAt(data.length() - 1));
    }
    res.close();
}


Now the LONG stuff:
Inserting:

PreparedStatement pstat =
    conn.prepareStatement("insert into T2 values (?, ?)");
for (int id = 1; id <= ENTRIES; id++) {
    System.out.println(" Inserting ID " + id + "...");
    pstat.setInt(1, id);
    pstat.setCharacterStream(2, new StringReader(outString.toString()), LENGTH);
    pstat.executeUpdate();
    conn.commit();
}
pstat.close();


Reading:

pstat = conn.prepareStatement("select DATA from T2 where id=?");
for (int id = 1; id <= ENTRIES; id++) {
    pstat.setInt(1, id);
    ResultSet res = pstat.executeQuery();
    while (res.next()) {
        String data = res.getString(1);
        //System.out.println(data + ": " + data.length());
        System.out.println(" " + id + ": length: " + data.length() +
                           "; data: " + data.charAt(0) + data.charAt(1) +
                           "..." + data.charAt(data.length() - 1));
    }
    res.close();
}




I would like to use CLOB instead of LONG because of its advantages, but the performance penalty I ran into makes me hesitate.

Can you help me speed up CLOBs?


Greetings

Christian

and Tom said...

Here are my findings

- the jdbc thin driver WRITES the clob 4,000 bytes at a time. It reads it 32k at a time.

- the jdbc THICK (oci8) driver uses internal API calls and is not limited to 4,000 bytes during the write. It is many times faster.

- the long interface appears to work at about the same speed using either thick or thin.


- using the OCI8 jdbc driver, we can write clobs at about the same speed as longs and read them back just as fast.

In any case, using DBMS_LOB.READ and DBMS_LOB.WRITE instead of the helper functions will allow you to control the "chunksize" explicitly. You can chunk the data up to 32k in size. See
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:110812348061
for an example using Java; it shows writing a BLOB using 32,000 bytes as the chunksize.
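
To illustrate the idea (this is only a sketch, not the code from that link), the loop below writes the clob in explicit 32,000-character pieces from JDBC. It reuses the conn, pstat2 and outString (a String, as in the test program below) from the question's own insert code; the CHUNK constant and the loop are mine:

// Sketch only: write the CLOB in explicit 32,000-character pieces
// instead of one large putString call.
// Assumes pstat2 = "select DATA from T1 where id=? for update" and
// outString, as in the question's code above.
final int CHUNK = 32000;
ResultSet res = pstat2.executeQuery();
if (res.next()) {
    CLOB clob = (CLOB) res.getClob(1);
    for (int off = 0; off < outString.length(); off += CHUNK) {
        int len = Math.min(CHUNK, outString.length() - off);
        // putString takes a 1-based character offset into the lob
        clob.putString(off + 1, outString.substring(off, off + len));
    }
}
res.close();
conn.commit();

Calling DBMS_LOB.WRITE / DBMS_LOB.WRITEAPPEND through a CallableStatement gives the same kind of control if you prefer to keep the chunking on the server side.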

I used the tables:

create table t1 ( id int, data clob );
create table t2 ( id int, data long );

in this test. My outcome was:

clob.jdbc.thick
JDBC driver version is 8.1.6.2.0
2001-01-22 09:45:18.378:Going to write
2001-01-22 09:45:24.626:Going to read ~6.3 seconds
2001-01-22 09:45:24.959:done ~.3 seconds

clob.jdbc.thin
JDBC driver version is 8.1.6.2.0
2001-01-22 09:43:38.04:Going to write
2001-01-22 09:44:17.577:Going to read ~39.5 seconds
2001-01-22 09:44:18.257:done ~.8 seconds


long.jdbc.thick
JDBC driver version is 8.1.6.2.0
2001-01-22 09:52:41.086:Going to write
2001-01-22 09:52:46.519:Going to read ~5.5 seconds
2001-01-22 09:52:47.257:done ~.7 seconds

long.jdbc.thin
JDBC driver version is 8.1.6.2.0
2001-01-22 09:53:53.426:Going to write
2001-01-22 09:53:58.906:Going to read ~5.5 seconds
2001-01-22 09:53:59.28:done ~.3 seconds

This test used a 1.4 MB clob.

I used this code:


import java.sql.*;
import oracle.jdbc.driver.*;
import oracle.sql.*;

class JDBCVersion
{

public static void showTimeStamp( String msg )
{
System.out.println( (new
Timestamp(System.currentTimeMillis())).toString() +
":" + msg );
}

public static void test_write(Connection conn) throws Exception
{
PreparedStatement pstat1 =
conn.prepareStatement
("insert into T1 values (?, empty_clob())");

PreparedStatement pstat2 =
conn.prepareStatement
("select DATA from T1 where id=? for update");

showTimeStamp( "Generating data" );
String outString = "Hello World";
for( int i = 1; i < 18; i++ )
outString = outString + outString;
showTimeStamp( "Generated " + outString.length() +
" bytes of data" );

showTimeStamp(" Inserting" );
pstat1.setInt(1, 1);
pstat1.executeUpdate();
showTimeStamp( "Inserted data...." );

pstat2.setInt(1, 1);
ResultSet res = pstat2.executeQuery();
showTimeStamp( "Executed Select" );

if (res.next())
{
showTimeStamp( "Fetched Row" );
CLOB clob = (CLOB)res.getClob(1);
showTimeStamp( "Got Clob" );
clob.putString(1, outString);
showTimeStamp( "put string" );
}
res.close();
conn.commit();
}

public static void test_read(Connection conn ) throws Exception
{
PreparedStatement pstat =
conn.prepareStatement("select DATA from T1 where id=?");

showTimeStamp( "Executing Query" );
pstat.setInt(1, 1);
ResultSet res = pstat.executeQuery();
while (res.next())
{
showTimeStamp( "Fetched row" );
Clob dataClob = res.getClob(1);
showTimeStamp( "got clob" );
String data =
dataClob.getSubString(1, (int)dataClob.length());
showTimeStamp(" got string " + ": length: " +
data.length() + "; data: " +
data.charAt(0) + data.charAt(1) + "..." +
data.charAt(data.length()-1));
}
res.close();
}

public static void main(String args[])throws Exception
{
DriverManager.registerDriver
(new oracle.jdbc.driver.OracleDriver());

Connection conn=DriverManager.getConnection
//("jdbc:oracle:thin:@aria-dev:1521:ora816dev"
("jdbc:oracle:oci8:@ora816dev.us.oracle.com"
,"scott","tiger");

DatabaseMetaData meta=conn.getMetaData();
System.out.println
("JDBC driver version is "+meta.getDriverVersion());

conn.setAutoCommit(false);

Statement stmt = conn.createStatement ();

stmt.execute( "alter session set sql_trace=true" );
stmt.execute( "delete from t1" );

showTimeStamp( "Going to write" );
test_write(conn);
showTimeStamp( "Going to read" );
test_read(conn);
showTimeStamp( "done" );
}
}



import java.io.*;
import java.lang.*;
import java.sql.*;
import java.util.*;
import oracle.jdbc.driver.*;
import oracle.sql.*;


class JDBCVersion2
{

public static void showTimeStamp( String msg )
{
System.out.println( (new
Timestamp(System.currentTimeMillis())).toString() +
":" + msg );
}

public static void test_write(Connection conn) throws Exception
{
PreparedStatement pstat =
conn.prepareStatement("insert into T2 values (?, ?)");
showTimeStamp( "Generating data" );
String outString = "Hello World";
for( int i = 1; i < 18; i++ )
outString = outString + outString;
showTimeStamp( "Generated " + outString.length() +
" bytes of data" );
int LENGTH = outString.length();

System.out.println(" Inserting " );
pstat.setInt(1, 1);
pstat.setCharacterStream
(2, new StringReader(outString.toString()), LENGTH);
showTimeStamp(" Inserting" );
pstat.executeUpdate();
showTimeStamp( "Inserted data...." );
conn.commit();

pstat.close();
}

public static void test_read(Connection conn ) throws Exception
{


PreparedStatement pstat = conn.prepareStatement
("select DATA from T2 where id=?");
pstat.setInt(1, 1);
showTimeStamp( "Executing Query" );
ResultSet res = pstat.executeQuery();
while (res.next()) {
showTimeStamp( "Fetched row" );
String data = res.getString(1);
showTimeStamp( "got long" );
//System.out.println(data + ": " + data.length());
showTimeStamp(" " + 1 + ": length: " +
data.length() + "; data: " +
data.charAt(0) +
data.charAt(1) + "..." +
data.charAt(data.length()-1));
}
res.close();
}

public static void main(String args[])throws Exception
{
DriverManager.registerDriver
(new oracle.jdbc.driver.OracleDriver());

Connection conn=DriverManager.getConnection
("jdbc:oracle:thin:@aria-dev:1521:ora816dev"
//("jdbc:oracle:oci8:@ora816dev.us.oracle.com"
,"scott","tiger");

DatabaseMetaData meta=conn.getMetaData();
System.out.println("JDBC driver version is "+
meta.getDriverVersion());

conn.setAutoCommit(false);

Statement stmt = conn.createStatement ();

stmt.execute( "alter session set sql_trace=true" );
stmt.execute( "delete from t2" );

showTimeStamp( "Going to write" );
test_write(conn);
showTimeStamp( "Going to read" );
test_read(conn);
showTimeStamp( "done" );
}
}







Comments

Speed of CLOB vs LONG

netroamer, January 05, 2002 - 11:02 pm UTC

I tested the Java programs. They work on my Win2000 machine, but they don't work against 8.1.5 on Solaris 7. I get an error like this:

E:\trsbean\work>java JDBCVersion
JDBC driver version is 8.1.6.0.0
2002-01-06 11:55:35.647:Going to write
2002-01-06 11:55:35.728:Generating data
2002-01-06 11:55:35.728:Generated 10240 bytes of data
2002-01-06 11:55:35.728: Inserting
2002-01-06 11:55:35.758:Inserted data....
2002-01-06 11:55:35.768:Executed Select
2002-01-06 11:55:35.768:Fetched Row
2002-01-06 11:55:35.788:Got Clob
2002-01-06 11:55:35.868:put string
Exception in thread "main" java.sql.SQLException: ORA-03113: end-of-file on communication channel

at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:114)
at oracle.jdbc.oci8.OCIDBAccess.check_error(OCIDBAccess.java:1503)
at oracle.jdbc.oci8.OCIDBAccess.commit(OCIDBAccess.java:397)
at oracle.jdbc.driver.OracleConnection.commit(OracleConnection.java:479)

at JDBCVersion.test_write(JDBCVersion.java:51)
at JDBCVersion.main(JDBCVersion.java:101)

What does it mean?


CLOB reader

Sheetal Mohan Sharma, January 24, 2006 - 6:35 pm UTC

I was not able to write more than 4000 characters to the database using a clob. This article pointed me in the right direction.

benchmark clob vs long

lizhuohua, January 25, 2013 - 7:55 am UTC

Hi Tom,
  I have a question about the speed of clob.
  Please take a look at this test:
create table lizh_clob(a clob);
create table lizh_long(a long); 
set serveroutput on;
declare
  lc_test clob;
  ll_test long;
  ln_start_time number;--  dbms_utility.get_time;
  ln_end_time number;
  ln_start_cpu number;
  ln_end_cpu number;
  ln_start_redo number;
  ln_end_redo number;
  procedure print
  is
  begin
    dbms_output.put_line('elapse :'|| round((ln_end_time - ln_start_time)/100,2) ||' seconds.');
    dbms_output.put_line('cpu used by this session :'|| (ln_end_cpu - ln_start_cpu) );
    dbms_output.put_line('redo size :'|| (ln_end_redo - ln_start_redo) );
  end;
begin
  lc_test := rpad('a',10000,'a');
  ll_test := rpad('a',10000,'a');
  dbms_output.put_line('loading clob....');
  select value into ln_start_cpu from v$mystat where statistic#=12;
  select value into ln_start_redo from v$mystat where statistic#=134;
  ln_start_time := dbms_utility.get_time;
  -------------------------------------------
  for i in 1..1000 loop
    insert into lizh_clob values(lc_test);
  end loop;
  -------------------------------------------
  commit;
  select value into ln_end_cpu from v$mystat where statistic#=12;
  select value into ln_end_redo from v$mystat where statistic#=134;
  ln_end_time := dbms_utility.get_time;
  print();
  dbms_output.put_line('loading long....');
  select value into ln_start_cpu from v$mystat where statistic#=12;
  select value into ln_start_redo from v$mystat where statistic#=134;
  ln_start_time := dbms_utility.get_time;
  -------------------------------------------
  for i in 1..1000 loop
    insert into lizh_long values(ll_test);
  end loop;
  -------------------------------------------
  commit;
  select value into ln_end_cpu from v$mystat where statistic#=12;
  select value into ln_end_redo from v$mystat where statistic#=134;
  ln_end_time := dbms_utility.get_time;
  print();
end;
/


SQL> @c:\work\test\clob_vs_long.sql
loading clob....                                                                
elapse :2.73 seconds.                                                           
cpu used by this session :61                                                    
redo size :25600868                                                             
loading long....                                                                
elapse :.17 seconds.                                                            
cpu used by this session :14                                                    
redo size :10870236                                                             

PL/SQL procedure successfully completed.

I ran it a couple of times with different data sizes (like rpad('a',1000,'a')) and got similar output.
Could you please explain why the redo size for the clob is nearly twice that of the long?
Are there any other reasons why the long is faster than the clob?
Or am I missing something here?
(Tested on 10g.)

Tom Kyte
January 30, 2013 - 1:57 pm UTC

longs are cached, longs are stored inline.

clobs are not cached by default (although they certainly can be). clobs are stored out of line when they exceed 4000 bytes.

a table with longs in it is *very very* limited in what you can do to that table, no reorgs, nothing online, no distributed, no create table as select, no working with that column in plsql and so on.


also, the speed of the long would slow down *seriously* once you fill your buffer cache (so if the table you are loading is large - this "fastness" won't last very long at all)
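
To make the "no create table as select" restriction above concrete, here is a hypothetical snippet in the JDBC style of the earlier code (the _copy table names are made up; t1 holds the clob and t2 the long, as in the original question):

// Sketch: CTAS is fine against the clob table but illegal against the long table.
Statement stmt = conn.createStatement();
stmt.execute("create table t1_copy as select * from t1");      // clob column: allowed
try {
    stmt.execute("create table t2_copy as select * from t2");  // long column: rejected
} catch (SQLException e) {
    // expected: ORA-00997: illegal use of LONG datatype
    System.out.println(e.getMessage());
}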


Using runstats - your code doesn't work except on your machine - statistic#'s change all over the place - the numbers were meaningless on my system!

ops$tkyte%ORA11GR2> create table lizh_clob(a clob) lob (a) store as basicfile (cache);

Table created.

ops$tkyte%ORA11GR2> create table lizh_long(a long);

Table created.

ops$tkyte%ORA11GR2> declare
  2    lc_test clob := rpad('a',10000,'a');
  3    ll_test long := rpad('a',10000,'a');
  4  begin
  5    runStats_pkg.rs_start;
  6    -------------------------------------------
  7    for i in 1..1000 loop
  8      insert into lizh_clob values(lc_test);
  9    end loop;
 10    -------------------------------------------
 11    runStats_pkg.rs_middle;
 12    -------------------------------------------
 13    for i in 1..1000 loop
 14      insert into lizh_long values(ll_test);
 15    end loop;
 16    -------------------------------------------
 17    runStats_pkg.rs_stop(10000);
 18  end;
 19  /
Run1 ran in 14 cpu hsecs
Run2 ran in 11 cpu hsecs
run 1 ran in 127.27% of the time

Name                                  Run1        Run2        Diff
STAT...table scan rows gotten       12,971         868     -12,103
STAT...undo change vector size     258,684     201,608     -57,076
STAT...physical read bytes         139,264           0    -139,264
STAT...physical read total byt     139,264           0    -139,264
STAT...cell physical IO interc     139,264           0    -139,264
STAT...session pga memory         -196,608           0     196,608
STAT...redo size                11,453,096  11,037,096    -416,000
STAT...logical read bytes from 128,499,712 130,498,560   1,998,848

Run1 latches total versus runs -- difference and pct
        Run1        Run2        Diff       Pct
      58,996      63,158       4,162     93.41%

PL/SQL procedure successfully completed.


redo for clob

Lizhuohua(John), January 30, 2013 - 8:31 pm UTC

Hi Tom,
  Thanks for your comment. I get similar output on 11g, but on 10g:
SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi
PL/SQL Release 10.2.0.3.0 - Production
CORE    10.2.0.3.0      Production
TNS for Linux: Version 10.2.0.3.0 - Production
NLSRTL Version 10.2.0.3.0 - Production

SQL> DROP TABLE LIZH_CLOB;
DROP TABLE LIZH_long;
Table dropped.

SQL>

Table dropped.

SQL> create table lizh_clob(a clob) lob (a) store as basicfile (cache);
create table lizh_long(a long);
declare
  lc_test clob := rpad('a',10000,'a');
  ll_test long := rpad('a',10000,'a');
begin
  runStats_pkg.rs_start;
  -------------------------------------------
  for i in 1..1000 loop
    insert into lizh_clob values(lc_test);
  end loop;
  -------------------------------------------
  runStats_pkg.rs_middle;
  -------------------------------------------
  for i in 1..1000 loop
    insert into lizh_long values(ll_test);
  end loop;
  -------------------------------------------
  runStats_pkg.rs_stop(10000);
end;
/
Table created.

SQL>
Table created.

SQL>   2    3    4    5    6    7    8    9   10   11   12   13   14   15   16   17   18   19
Run1 ran in 32 cpu hsecs
Run2 ran in 13 cpu hsecs
Run 1 ran in 246.15 % of the time

Name                                  Run1        Run2        Diff
STAT..physical read total byte      16,384      32,768      16,384
STAT..physical read bytes           16,384      32,768      16,384
STAT..session uga memory                 0      65,408      65,408
STAT..undo change vector size      311,176     149,532    -161,644
STAT..session pga memory          -393,216     131,072     524,288
STAT..redo size                 21,323,356  10,989,476 -10,333,880

Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
77,880      70,799      -7,081    110.00%

PL/SQL procedure successfully completed.

Also, if I turn off archivelog mode on 11g, the redo size for the clob (with the NOCACHE option) is very small (it seems there is no logging for it):

SQL> select log_mode from v$database;

LOG_MODE
------------
NOARCHIVELOG

SQL> drop table lizh_clob;

Table dropped.

SQL> drop table lizh_long;

Table dropped.

SQL> 
SQL> create table lizh_clob(a clob) lob (a) store as basicfile (nocache);

Table created.

SQL> 
SQL> create table lizh_long(a long);

Table created.

SQL> 
SQL> declare
  2    lc_test clob := rpad('a',10000,'a');
  3    ll_test long := rpad('a',10000,'a');
  4  begin
  5    runStats_pkg.rs_start;
  6    -------------------------------------------
  7    for i in 1..1000 loop
  8      insert into lizh_clob values(lc_test);
  9    end loop;
 10    -------------------------------------------
 11    runStats_pkg.rs_middle;
 12    -------------------------------------------
 13    for i in 1..1000 loop
 14      insert into lizh_long values(ll_test);
 15    end loop;
 16    -------------------------------------------
 17    runStats_pkg.rs_stop(10000);
 18  end;
 19  /
Run1 ran in 28 cpu hsecs                                                        
Run2 ran in 8 cpu hsecs                                                         
Run 1 ran in 350 % of the time                                                  
                                                                                
Name                                  Run1        Run2        Diff              
LATCH.cache buffers chains          34,977      46,807      11,830              
STAT..undo change vector size      225,552     134,020     -91,532              
STAT..redo size for direct wri     102,068           0    -102,068              
STAT..session pga memory         2,228,224           0  -2,228,224              
STAT..redo size                    859,028  10,969,788  10,110,760              
STAT..physical write bytes      16,384,000           0 -16,384,000              
STAT..physical write total byt  16,384,000           0 -16,384,000              
STAT..cell physical IO interco  16,384,000           0 -16,384,000              
                                                                                
Run1 latches total versus runs -- difference and pct                            
Run1        Run2        Diff       Pct                                          
56,604      60,428       3,824     93.67%                                       

PL/SQL procedure successfully completed.


There is a LOB index for the clob, so redo of about twice the data size can make sense in 10g. But I have no idea what the difference is between 11g and 10g, and why noarchivelog mode can significantly reduce the redo size in 11g.
We are planning to upgrade to 11g, so can we reduce redo size by setting CACHE for the clob? Is there any disadvantage to using CACHE? (We do a lot of LOB insertion; less than 10% of it may be read later.)


BTW: sorry for using meaningless statistic#s in my previous post.

Thanks,
John

Tom Kyte
January 31, 2013 - 2:28 pm UTC

I cannot reproduce

ops$tkyte%ORA10GR2> create table lizh_clob(a clob) lob (a) store as basicfile (cache);

Table created.

ops$tkyte%ORA10GR2> create table lizh_long(a long);

Table created.

ops$tkyte%ORA10GR2> declare
  2    lc_test clob := rpad('a',10000,'a');
  3    ll_test long := rpad('a',10000,'a');
  4  begin
  5    runStats_pkg.rs_start;
  6    -------------------------------------------
  7    for i in 1..1000 loop
  8      insert into lizh_clob values(lc_test);
  9    end loop;
 10    -------------------------------------------
 11    runStats_pkg.rs_middle;
 12    -------------------------------------------
 13    for i in 1..1000 loop
 14      insert into lizh_long values(ll_test);
 15    end loop;
 16    -------------------------------------------
 17    runStats_pkg.rs_stop(10000);
 18  end;
 19  /
Run1 ran in 10 cpu hsecs
Run2 ran in 8 cpu hsecs
run 1 ran in 125% of the time

Name                                  Run1        Run2        Diff
STAT...redo size                10,982,164  10,994,280      12,116
STAT...undo change vector size     230,760     145,020     -85,740

Run1 latches total versus runs -- difference and pct
        Run1        Run2        Diff       Pct
      58,246      63,817       5,571     91.27%

PL/SQL procedure successfully completed.






"Also, if I turn off archivelog mode on 11g, the redo size for the clob (with the NOCACHE option) is very small (it seems there is no logging for it)"


why bother using a database if you are going to turn off archivelog mode? You will lose all modifications since your last full COLD backup some day (will - NOT might - will lose all of your modifications) - so why bother?


with nocache the lob write can be done direct path - which means you'll wait for that IO to complete.
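
If that direct-path wait is the concern, the LOB can be switched to cached writes; a minimal sketch, again issued through JDBC as elsewhere in this thread (lizh_clob and column a are the test table from the posts above):

// Sketch: change the nocache LOB to CACHE so lob writes go through the
// buffer cache instead of being done (and waited on) as direct-path I/O.
Statement stmt = conn.createStatement();
stmt.execute("alter table lizh_clob modify lob (a) (cache)");

The same CACHE clause can of course go directly on the CREATE TABLE, as the runstats test earlier in this thread did.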

A reader, February 01, 2013 - 2:59 am UTC

Hi Tom,
Thanks for your patience. I tested the same script on a couple of 10g environments and get similar results: the redo size for the clob is about twice that of the long. Is there any other possible reason?
We will not use noarchivelog; I tested it only out of interest :)

Thanks,
John
Tom Kyte
February 01, 2013 - 8:33 am UTC

What 10g release are you using? (select * from v$version)

oracle version

Lizhuohua(John), February 02, 2013 - 6:27 am UTC

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi
PL/SQL Release 10.2.0.3.0 - Production
CORE    10.2.0.3.0      Production
TNS for Linux: Version 10.2.0.3.0 - Production
NLSRTL Version 10.2.0.3.0 - Production

Thanks,
John

Tom Kyte
February 04, 2013 - 9:33 am UTC

do you have a patched up current 10.2.0.5 around to test on?

upgrade

Lizhuohua(John), February 04, 2013 - 11:03 pm UTC

Hi Tom,
Thanks, we don't have a 10.2.0.5 environment. Since we are planning to upgrade to 11g, and the issue seems to be resolved in 11g, I'm going to ignore it.
Thanks for your time.

Thanks,
John

clob and characterset

Lizhuohua, June 26, 2013 - 9:40 am UTC


Hi Tom,
Please take a look at my test.
I ran the following script on two 11.2 databases with different character sets: 1. AL32UTF8, 2. WE8MSWIN1252.

create table t(a varchar2(10 byte), b clob ) lob (b) store as (cache);
insert into t values('abcd','ABCD' );
commit;
dump block

Here is the result:
1.
PARAMETER VALUE
------------------------------ ------------------------------
NLS_CHARACTERSET AL32UTF8
NLS_NCHAR_CHARACTERSET AL16UTF16

col 0: [ 4] 61 62 63 64
col 1: [44]
00 54 00 01 02 0c 80 00 00 02 00 00 00 01 00 00 00 0e 50 ab 00 18 09 00 00
00 00 00 00 08 00 00 00 00 00 01 00 41 00 42 00 43 00 44

2.
PARAMETER VALUE
------------------------------ ----------------------------------------
NLS_CHARACTERSET WE8MSWIN1252
NLS_NCHAR_CHARACTERSET AL16UTF16

col 0: [ 4] 61 62 63 64
col 1: [40]
00 54 00 01 02 0c 00 00 00 01 00 00 00 01 00 00 00 16 bc 83 00 14 09 00 00
00 00 00 00 04 00 00 00 00 00 01 41 42 43 44

For AL32UTF8, the clob occupies two bytes for each character, but for WE8MSWIN1252 it occupies one.

From my understanding, the clob should use the same character set as varchar2. Could you please give me an idea why it occupies two bytes per character with AL32UTF8?

Thanks,
Lizhuohua
Tom Kyte
July 01, 2013 - 6:52 pm UTC

clobs do not use the same character set

http://docs.oracle.com/cd/B10500_01/server.920/a96524/c13datyp.htm#3234

they are stored differently. we want to be able to randomly access them without having to read from the beginning of the string.
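
As a small illustration of that random access (a sketch only; the 500,000 offset is arbitrary and assumes the ~1.4 MB clob loaded by the test program earlier in this thread), the Clob.getSubString call used throughout this thread can pull a piece out of the middle of the value without fetching everything before it:

// Sketch: fetch 10 characters starting at character position 500,000
// of the clob, without reading the first 499,999 characters.
PreparedStatement ps = conn.prepareStatement("select DATA from T1 where id=?");
ps.setInt(1, 1);
ResultSet rs = ps.executeQuery();
if (rs.next()) {
    Clob c = rs.getClob(1);
    String middle = c.getSubString(500000, 10);  // 1-based character offset
    System.out.println(middle);
}
rs.close();

A fixed-width internal encoding makes that character-to-byte arithmetic trivial, which is the trade-off behind the two bytes per character seen in the block dump above.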

clob and characterset

Lizhuohua, July 02, 2013 - 10:10 am UTC

Hi Tom,
Many thanks for pointing that out.
So I think that's why more redo is generated by CLOB than by LONG (please see my previous post in this thread: almost twice as much).

BTW, the 11g doc at http://docs.oracle.com/cd/E11882_01/server.112/e26088/sql_elements001.htm#sthref202 says "Both fixed-width and variable-width character sets are supported, and both use the database character set" but doesn't mention that the clob uses a two-byte, fixed-width Unicode character set internally.
And http://docs.oracle.com/cd/B19306_01/server.102/b14220/datatype.htm#sthref3864 only mentions storing varying-width LOB data in a fixed-width Unicode character set internally; it doesn't say a two-byte Unicode character set.
It makes for a little confusion :)
Thanks,
Lizhuohua
