Question and Answer

Tom Kyte

Thanks for the question, Julie.

Asked: August 08, 2000 - 3:19 am UTC

Last updated: September 28, 2011 - 5:40 am UTC

Version: 8.1.6

Viewed 50K+ times.

You Asked

How do I convert my LONG columns into LOBs?

and we said...



That's easy -- see the TO_LOB function, available in Oracle8i release 8.1 and up. TO_LOB can be used in insert, create table as select, and update statements to convert

o a LONG into a CLOB
o a LONG RAW into a BLOB

Note that PL/SQL does not recognize this function, so if you attempt to use it in a stored procedure, it will fail. Use dynamic SQL to avoid that issue. For example:

ops$tkyte@ORA816.US.ORACLE.COM> create table t1
( x int, y long );

Table created.

ops$tkyte@ORA816.US.ORACLE.COM> create table t2
( x int, y clob );

Table created.

ops$tkyte@ORA816.US.ORACLE.COM>
ops$tkyte@ORA816.US.ORACLE.COM> insert into t1 values
( 1, rpad('*',4000,'*') );

1 row created.

ops$tkyte@ORA816.US.ORACLE.COM>
ops$tkyte@ORA816.US.ORACLE.COM> insert into t2
select x, to_lob(y) from t1;

1 row created.

ops$tkyte@ORA816.US.ORACLE.COM>
ops$tkyte@ORA816.US.ORACLE.COM> begin
2 insert into t2 select x, to_lob(y) from t1;
3 end;
4 /
begin
*
ERROR at line 1:
ORA-06550: line 2, column 27:
PLS-00201: identifier 'TO_LOB' must be declared
ORA-06550: line 2, column 2:
PL/SQL: SQL Statement ignored


ops$tkyte@ORA816.US.ORACLE.COM>
ops$tkyte@ORA816.US.ORACLE.COM> begin
2 execute immediate
3 'insert into t2 select x, to_lob(y) from t1';
4 end;
5 /

PL/SQL procedure successfully completed.


Comments

Edward R, March 13, 2001 - 2:44 pm UTC

Very terse and helpful information.
Thanks for saving my time.

A reader, June 08, 2001 - 6:20 am UTC


What else could we want for this question?

Gururaj, June 14, 2001 - 12:16 pm UTC

Hey thank you Tom. The answer is just perfect.

Order by a CLOB column

Andre W Nasser, October 10, 2001 - 5:53 pm UTC

The manual says I cannot use ORDER BY with a CLOB type. How can I simulate that? Is there a function that is the opposite of TO_LOB, like TO_CHAR(<clob>)?

Tom Kyte
October 10, 2001 - 6:38 pm UTC

you can order by the first 4000 characters of the clob only:

select * from t order by dbms_lob.substr(clob_col, 4000,1 );

LONG TO BLOB THRU DB LINK

reddy, January 31, 2002 - 9:44 am UTC

Hi Tom,
I have many tables with LONG RAW data types in a remote db, and we are developing an application on the local db that needs to access these tables. Can I insert the remote table data into the local db tables and convert the data type to BLOB in the local db by using db links? Please help.


Tom Kyte
January 31, 2002 - 6:14 pm UTC

conversion in 8.0.5

Armin Eberle, June 10, 2002 - 9:07 am UTC

How can I convert from LONG to BLOB in 8.0.5??

Look at Oracle DocID 69627.1 and then Tom's answer!

Hubertus Krogmann, June 11, 2002 - 9:26 am UTC

Exactly what we need, but is there a solution for:

alter table x add ( c clob ) select to_lob(x.l) from x;
alter table x drop column l;

meaning: add a CLOB column populated from the LONG column on the fly?

Tom Kyte
June 11, 2002 - 1:35 pm UTC

In 9i there is an


alter table x modify ( c clob );

that'll do it in one statement (you will have to rebuild all indexes on that table afterwards as well, since it tends to rewrite the table)

I don't fully understand your approach above. In 8i you can:

alter table x add ( c clob );
update x set c = to_lob(l);
alter table x drop column l;

BUT you'll probably find a parallel create table as select UNRECOVERABLE to be much much faster.
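A sketch of that parallel CTAS approach (table and column names here are illustrative, not from the question; UNRECOVERABLE is the older spelling of NOLOGGING):

```sql
-- Hypothetical sketch: convert LONG column L of table X to a CLOB via a
-- direct-path CREATE TABLE AS SELECT instead of ADD/UPDATE/DROP COLUMN.
create table x_new
parallel
nologging              -- UNRECOVERABLE in older releases
as
select pk, to_lob(l) c
  from x;

-- then swap the tables in and rebuild indexes, constraints, grants:
drop table x;
rename x_new to x;
```

The direct-path, non-logged load skips undo/redo generation for the bulk of the data, which is where the speedup comes from; the price is the re-creation of everything that hung off the old table.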

What about updates

A P Clarke, September 27, 2002 - 12:05 pm UTC

Tom

You state that it is possible to use TO_LOB in update statements.  Unfortunately the documentation and my experience contradict you:

SQL> desc t_long
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 PK_COL                                             NUMBER
 COL1                                               LONG

SQL> desc t_clob
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 PK_COL                                             NUMBER
 COL1                                               CLOB

SQL> INSERT INTO t_clob (pk_col, col1)
  2  SELECT t.pk_col, to_lob(t.col1)
  3  FROM   t_long t, tl_log l
  4  WHERE  l.action = 'I'
  5  AND    t.pk_col = l.tl_pk_col
  6  /

3 rows created.

SQL> DECLARE
  2  ln number := 0;
  3  BEGIN
  4     FOR lrec IN ( SELECT t.col1, t.pk_col
  5                   FROM   t_long t, tl_log l
  6                   WHERE  l.action = 'U'
  7                   AND    l.timestamp < sysdate
  8                   AND    t.pk_col = l.tl_pk_col )
  9     LOOP
 10        EXECUTE IMMEDIATE
 11        'UPDATE t_clob c
 12        SET    c.col1 = to_lob(:1)
 13        WHERE c.pk_col = :2' USING lrec.col1, lrec.pk_col;
 14     END LOOP;
 15  END;
 16  /

DECLARE
*
ERROR at line 1:
ORA-00932: inconsistent datatypes
ORA-06512: at line 10

I really need to be able to update a CLOB column from a LONG column.  

Summary: we need to replicate a table that has a LONG column.  Replication does not support LONGs.  The application is currently in Forms 4.5 (I know, I know) which doesn't support CLOBs, or indeed VARCHAR2(4000) fields.  I am attempting to simulate replication using my own trigger, log table and stored procedure.  This works for INSERT but not for UPDATE.  The docs say TO_LOB only works for INSERTs.  So how can I update changed records (other than by a DELETE-INSERT combo)?

Thanks in advance, APC

 



 

Tom Kyte
September 27, 2002 - 2:00 pm UTC

you're right, I should not have said "update"


it'll be delete/insert.
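A sketch of that delete/insert alternative, reusing the table names from the question above (t_long, t_clob, tl_log; the exact column list is assumed):

```sql
-- Hypothetical sketch: "update" the replicated CLOB rows by deleting the
-- changed rows and re-inserting them with TO_LOB (which INSERT supports).
delete from t_clob c
 where c.pk_col in ( select l.tl_pk_col from tl_log l where l.action = 'U' );

insert into t_clob ( pk_col, col1 )
select t.pk_col, to_lob(t.col1)
  from t_long t, tl_log l
 where l.action = 'U'
   and t.pk_col = l.tl_pk_col;
```

Both statements are plain SQL, so if they live in a stored procedure they would go through EXECUTE IMMEDIATE as shown earlier on this page.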

to_lob with SELECT INTO

Craig, May 06, 2003 - 3:21 pm UTC

Can TO_LOB be used in a SELECT INTO query? I'm not having any luck -- I'm seeing an inconsistent datatypes error. Is this even possible?

DECLARE
    lv_view_clob clob := empty_clob();
    lv_view_name varchar2(10) := 'my_view';
BEGIN
    EXECUTE IMMEDIATE 'select text from dba_views where owner = USER and view_name = :view_name'
        INTO lv_view_clob
        USING lv_view_name;
END;


Tom Kyte
May 06, 2003 - 3:44 pm UTC

if the view text is 32k or less (there is a length field in there) then


declare
    l_text long;
begin
    select text
      into l_text
      from dba_views
     where owner = user
       and view_name = lv_view_name;
    ...

will work -- else you will have to TO_LOB it into a global temporary table and then retrieve the lob locator from there.
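A sketch of that global-temporary-table route (the GTT name, view name, and column sizes are illustrative):

```sql
-- Hypothetical sketch: stage the LONG into a GTT as a CLOB, then grab
-- the lob locator with an ordinary SELECT INTO.
create global temporary table view_text_gtt
( view_name varchar2(30), text clob )
on commit delete rows;

declare
    l_clob clob;
begin
    insert into view_text_gtt ( view_name, text )
    select view_name, to_lob(text)
      from dba_views
     where owner = user
       and view_name = 'MY_VIEW';

    select text
      into l_clob
      from view_text_gtt
     where view_name = 'MY_VIEW';
    -- work with the lob locator l_clob here...
end;
```

The INSERT ... SELECT is one of the two documented contexts for TO_LOB, and the GTT's ON COMMIT DELETE ROWS means the staged copy disappears when the transaction ends.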

to_lob

umesh, June 23, 2003 - 8:02 am UTC

If t1 has a column which is of LONG type, why do I get this?

1 begin
2 execute immediate 'create or replace view v1 as select to_lob(y) yy from t1';
3* end;
scott>/
begin
*
ERROR at line 1:
ORA-00932: inconsistent datatypes
ORA-06512: at line 2


Tom Kyte
June 23, 2003 - 8:23 am UTC

because to_lob is documented to work in two cases:

o in a CREATE TABLE .. AS SELECT ...
o in an INSERT INTO .. SELECT ...

http://docs.oracle.com/docs/cd/A87860_01/doc/server.817/a85397/function.htm#79466

it is used to perform one-time, permanent conversions of LONGs to CLOBs.

LOB to LONG RAW

Moiz, August 05, 2003 - 2:04 am UTC

Hi Tom,
We are implementing anywhere-banking replication, and for it I have a requirement on my signature table: I would like to convert a LOB column into a LONG RAW column.
Please do the needful.
regards,
Moiz Arviwala

Tom Kyte
August 05, 2003 - 7:17 am UTC

the needful in this case is to say "doesn't work that way".

there isn't anything in the database to convert the correct datatype (lob) into the deprecated datatype (long raw). long raw -> lob, yes (to_lob). the other way -- no, doesn't exist.

why Oracle still uses LONG?

A reader, February 20, 2004 - 7:44 am UTC

Hi

Oracle has been telling users to migrate LONGs to LOBs since Oracle 8. Ironically, I still see Oracle using LONG in the data dictionary, even in 10g. Why so? Isn't it very easy to migrate LONGs to LOBs?

open rc1 for l_sql_clob

Justin, March 25, 2004 - 12:34 pm UTC

When building a dynamic sql statement in PL/SQL using native dynamic sql, what would you use for the datatype of the variable storing the statement? I have seen you use LONG in some examples on this site. I would rather use CLOB, but I get "PLS-00382 expression is of wrong type" when I try to open the ref cursor passing it a CLOB.

eg.
OPEN rc1 FOR l_sql; -- with l_sql being a CLOB
brings up the 00382.

So, I decided to say

OPEN rc1 FOR TO_CHAR(l_sql);

and it worked fine. Would you consider that a fine approach?

Also, is it processing intensive to convert a CLOB to a CHAR for the system? It seemed to execute just as fast as using LONG.

Thanks much for your help.

Tom Kyte
March 25, 2004 - 9:14 pm UTC

LONG -- and if longer than 32k, a PL/SQL table and DBMS_SQL.

(but I'd be questioning any QUERY (select) that approaches 32k in the first place!)

thanks much

Justin, April 02, 2004 - 10:47 am UTC

Do you have any anxiety issues with what I did using the CLOB?

I won't give the long winded reason why LONG is a problem for us, but in short, it's strictly a debug issue...





Tom Kyte
April 02, 2004 - 1:29 pm UTC

officially, it should not work, as execute immediate takes a PL/SQL variable as input and those are officially limited to 32k.

it is working by accident for strings > 32k, and that accident might "change" in the future.

the only support for sql statements >32k in plsql would be dbms_sql with the parse interface that takes a plsql table type.
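A sketch of that DBMS_SQL table-based parse interface (the statement text here is just the small example from earlier on this page; a real use would split a >32k statement into many pieces):

```sql
-- Hypothetical sketch: parsing a statement supplied as a PL/SQL table of
-- varchar2 pieces, the documented route for SQL text larger than 32k.
declare
    l_cursor integer := dbms_sql.open_cursor;
    l_stmt   dbms_sql.varchar2s;   -- index-by table of varchar2(256)
    l_rows   number;
begin
    l_stmt(1) := 'insert into t2 ';
    l_stmt(2) := 'select x, to_lob(y) from t1';
    -- ...append as many pieces as needed, each under 256 bytes...

    dbms_sql.parse( l_cursor,
                    l_stmt,
                    l_stmt.first,
                    l_stmt.last,
                    false,               -- do not insert linefeeds between pieces
                    dbms_sql.native );
    l_rows := dbms_sql.execute( l_cursor );
    dbms_sql.close_cursor( l_cursor );
end;
```

The pieces are concatenated at parse time, so the assembled statement can exceed any single varchar2 limit.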

Gotcha

Justin, April 05, 2004 - 1:10 pm UTC

Indeed, we have no statements close to 32k, and as such will continue on.

Very good help. Thanks!

Convert LONG to NCLOB

A Reader, November 22, 2004 - 9:07 pm UTC

Hi Tom,

When we tried to convert a LONG datatype in a table to NCLOB to support multiple languages, we saw the tablespace usage for the table grow dramatically: 1 gig of data would become 8 gigs. The LONG was in WE8ISO8859P1 and the NCLOB is in UTF-8. Is it true that space usage increases this much when using NCLOB? If so, is there any way we can reduce the space usage?

Thank you in advance for your help.

Tom Kyte
November 22, 2004 - 9:30 pm UTC

CLOB's and NCLOB's in a varying width character set database (utf-8 being one of them) are stored in a fixed width character set (in order to avoid having to always start at byte 1 in order to get to character 432,413).

The nclob is really stored as UCS2

Also contributing to this is the manner in which lobs are stored. there is

a) a lob index (think of this as a primary key on "lob id, seq, <chunk>", where lob id is the lob locator, seq is a sequence number, and <chunk> is a pointer to the i'th chunk of data).

b) the lob segment stored in chunks.


You cannot do anything about (a) but (b) you have some control over. Look at your chunksize -- if it is say 64k -- every lob will take a minimum of N*64k of storage -- a chunk is not shared between lobs.

So, if you have lots of smallish lobs -- and a couple of big ones -- you might use a smaller chunksize to avoid "wastage" at the end of the last chunk (at the expense of more chunk overhead on the big ones)
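A sketch of specifying a smaller chunk at table creation time (table name, column name, and the 8k figure are all illustrative):

```sql
-- Hypothetical sketch: an 8k chunk instead of a 64k one, so each small
-- lob wastes at most ~8k at the end of its last chunk.
create table docs
( id  number,
  doc nclob
)
lob (doc) store as
( chunk 8192 );
```

The chunk must be a multiple of the database block size, and it cannot be changed on an existing LOB segment without rebuilding it (e.g. via ALTER TABLE ... MOVE).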

How can we determine the size of a CLOB/NCLOB in a table?

A Reader, November 22, 2004 - 10:03 pm UTC

Tom,

Can you please help on how we can determine the size of a CLOB/NCLOB column when it exceeds 4000 bytes? How can we determine how many rows have the CLOB column within 4000 bytes? How can we find and change the chunk size of a CLOB table?

Thank you.

Tom Kyte
November 23, 2004 - 7:09 am UTC

dbms_lob.getlength tells you how long the clob is; at less than about 4000 it'll be inline, else out of line.

the chunk size is visible in the dictionary view user|all|dba_lobs
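Putting those two together (the table and column names are illustrative; the ~4000-byte inline threshold is approximate, as noted above):

```sql
-- Hypothetical sketch: per-row lob length with a rough inline/out-of-line
-- classification, plus the chunk size from the dictionary.
select id,
       dbms_lob.getlength(doc) len,
       case when dbms_lob.getlength(doc) <= 4000
            then 'likely inline'
            else 'out of line'
       end storage
  from docs;

select table_name, column_name, chunk
  from user_lobs
 where table_name = 'DOCS';
```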

Get the length of a LONG column

A Reader, November 29, 2004 - 9:14 pm UTC

Hi Tom,

We are trying to determine the length of a long column by using a function like the following:

CREATE OR REPLACE FUNCTION Get_long_length
RETURN number IS
    l_long   LONG;
    l_return number;
BEGIN
    l_return := 0;
    FOR x IN ( SELECT long_column FROM t_long ) LOOP
        l_long := x.long_column;
        IF length(l_long) > 4000 THEN
            l_return := l_return + 1;
        END IF;
    END LOOP;
    RETURN l_return;
END;
/

When the length of the LONG record is not that long -- say, 4500 -- the function works fine. However, when the length is really long (we don't know how long; we have records that are over 300k for sure), we get the following errors:

ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at line 1

My question is: how can we determine the length of a LONG column that is really very long? Our database is 8.1.7.5 and we cannot convert the column to CLOB now.

Thanks in advance.

Tom Kyte
November 30, 2004 - 7:28 am UTC

ops$tkyte@ORA9IR2> create or replace
  2  function long_length( p_query in varchar2,
  3                        p_name  in varchar2,
  4                        p_value in varchar2 )
  5  return number
  6  as
  7      l_cursor       integer default dbms_sql.open_cursor;
  8      l_n            number;
  9      l_long_val     long;
 10      l_long_len     number;
 11      l_buflen       number := 32760;
 12      l_curpos       number := 0;
 13      l_return_value number;
 14  begin
 15      dbms_sql.parse( l_cursor, p_query, dbms_sql.native );
 16      dbms_sql.bind_variable( l_cursor, p_name, p_value );
 17
 18      dbms_sql.define_column_long(l_cursor, 1);
 19      l_n := dbms_sql.execute(l_cursor);
 20
 21      if (dbms_sql.fetch_rows(l_cursor)>0)
 22      then
 23          loop
 24              dbms_sql.column_value_long(l_cursor, 1, l_buflen, l_curpos ,
 25                                          l_long_val, l_long_len );
 26              l_curpos := l_curpos + l_long_len;
 27              l_return_value := nvl(l_return_value,0) + l_long_len;
 28              exit when l_long_len = 0;
 29        end loop;
 30     end if;
 31     return l_return_value;
 32  exception
 33     when others then
 34        if dbms_sql.is_open(l_cursor) then
 35           dbms_sql.close_cursor(l_cursor);
 36        end if;
 37        raise;
 38  end long_length;
 39  /
 
Function created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select view_name, text_length,
  2         long_length( 'select text from user_views where view_name = :x', ':x', 'V' ) ll
  3    from user_views
  4  /
 
VIEW_NAME                      TEXT_LENGTH         LL
------------------------------ ----------- ----------
V                                    57684      57684
 
ops$tkyte@ORA9IR2>
 

Maximum Length of longs

Kevin, March 08, 2005 - 5:14 pm UTC

Tom,

I was looking through this article because I was moving data between machines using the SQL*Plus COPY command. The data being moved contained LONG columns, and I wanted to know the maximum column length of the LONG columns I was moving.

I modified your code above to achieve this as:

CREATE OR REPLACE FUNCTION max_long_length (p_long_select IN VARCHAR2) RETURN NUMBER IS
    l_cursor   INTEGER;
    l_buflen   number := 32760;
    l_curpos   number := 0;
    l_long_val long;
    l_long_len number;
    max_length INTEGER;
    ignore     INTEGER;
BEGIN
    max_length := 0;
    -- Prepare the cursor ...
    l_cursor := dbms_sql.open_cursor;
    DBMS_SQL.PARSE(l_cursor, p_long_select, DBMS_SQL.native);
    dbms_sql.define_column_long(l_cursor, 1);
    ignore := DBMS_SQL.EXECUTE(l_cursor);
    -- Fetch a row from the cursor and check its length ...
    LOOP
        l_curpos := 0;
        IF DBMS_SQL.FETCH_ROWS(l_cursor) > 0 THEN
            -- get the length of the column
            loop
                dbms_sql.column_value_long(l_cursor, 1, l_buflen, l_curpos, l_long_val, l_long_len);
                l_curpos := l_curpos + l_long_len;
                exit when l_long_len = 0;
            end loop;
            if l_curpos > max_length then
                max_length := l_curpos;
            end if;
        ELSE
            -- No more rows to check
            EXIT;
        END IF;
    END LOOP;
    -- Close the cursor ...
    DBMS_SQL.CLOSE_CURSOR(l_cursor);
    return max_length;
EXCEPTION
    WHEN OTHERS THEN
        IF DBMS_SQL.IS_OPEN(l_cursor) THEN
            DBMS_SQL.CLOSE_CURSOR(l_cursor);
        END IF;
        RAISE;
END;
/

and since the function refused to work on my 8.0.6.3 database I created a Proc to display the results ....

CREATE OR REPLACE PROCEDURE Show_max_long_length (p_long_select_table IN VARCHAR2, p_long_select_col IN VARCHAR2) IS
    l_cursor   INTEGER;
    l_buflen   number := 32760;
    l_curpos   number := 0;
    l_long_val long;
    l_long_len number;
    max_length INTEGER;
    ignore     INTEGER;
BEGIN
    max_length := 0;
    -- Prepare the cursor ...
    l_cursor := dbms_sql.open_cursor;
    DBMS_SQL.PARSE(l_cursor, 'SELECT ' || p_long_select_col || ' from ' || p_long_select_table, DBMS_SQL.native);
    dbms_sql.define_column_long(l_cursor, 1);
    ignore := DBMS_SQL.EXECUTE(l_cursor);
    -- Fetch a row from the cursor and check its length ...
    LOOP
        l_curpos := 0;
        IF DBMS_SQL.FETCH_ROWS(l_cursor) > 0 THEN
            -- get the length of the column
            loop
                dbms_sql.column_value_long(l_cursor, 1, l_buflen, l_curpos, l_long_val, l_long_len);
                l_curpos := l_curpos + l_long_len;
                exit when l_long_len = 0;
            end loop;
            if l_curpos > max_length then
                max_length := l_curpos;
            end if;
        ELSE
            -- No more rows to check
            EXIT;
        END IF;
    END LOOP;
    -- Close the cursor ...
    DBMS_SQL.CLOSE_CURSOR(l_cursor);
    dbms_output.put_line(p_long_select_table || ':' || ' Maximum Long Length for ' || p_long_select_col || ' is ' || max_length);
    return;
EXCEPTION
    WHEN OTHERS THEN
        IF DBMS_SQL.IS_OPEN(l_cursor) THEN
            DBMS_SQL.CLOSE_CURSOR(l_cursor);
        END IF;
        RAISE;
END;
/

Just a couple of questions:
1. What impact does the LONG parameter in SQL*Plus have? i.e., does it actually reserve the storage specified, or is storage obtained as needed up to that limit? The driving question behind this is: is it good practise to set this to the maximum available long size, or should it be tailored to the long columns being moved?
2. In your example above, why is the function executed against USER_VIEWS with no qualification? Have you only got 1 view in this user?

Also, I believe there should be a dbms_sql.close_cursor(l_cursor); prior to the return from
the function in your example above -- when I ran it as a test on my system I got max open cursors exceeded.

Tom Kyte
March 08, 2005 - 7:23 pm UTC

1) I have not measured it; we'd have to set up some tests and measure the memory usage. I believe they should be using piecewise fetches, meaning they never hit the maximum extreme, but we'd have to set it up and try it (on my way out in a bit, don't have time to test right now -- but if you do :)

2) 'select text from user_views where view_name = :x',

it has a predicate?

but yes, it looks like a close went missing in there -- I have it in the exception block but not in the regular code somehow.

Length of longs

Kevin, March 08, 2005 - 7:50 pm UTC

re 2.

select view_name, text_length,
long_length( 'select text from user_views where view_name = :x', ':x', 'V' ) ll
from user_views
/

no predicate on the query surrounding the function call .... won't that produce a call for each of the views in user_views?

Were you intending something along the lines of :

select view_name, text_length,
long_length( 'select text from user_views where view_name = :x',
':x', view_name ) ll
from user_views
/

I'll try and check memory usage today (was hoping you'd know offhand :-)

Tom Kyte
March 08, 2005 - 11:08 pm UTC

yes, it will -- that was the point -- I wanted the long length for each view.


ops$tkyte@ORA9IR2> select view_name, text_length,
  2         long_length( 'select text from user_views where view_name = :x', 
':x', 'V' ) ll
  3    from user_views
  4  /

is what I had coded? 

It must be me ....

Kevin, March 09, 2005 - 1:33 am UTC

It was just that the literal 'V' as the 3rd parameter meant that it displayed the same value for LL for each view. Maybe that was what you were trying to show?

ie I created 4 views - 3 with a length of 38 and 1 with a length of 47:

1 select view_name, text_length,
2 long_length( 'select text from user_views where view_name = :x', ':x', 'V' ) ll
3* from user_views
17:25:14 cptglobal:upk1@nuh794>
17:25:14 cptglobal:upk1@nuh794> /

VIEW_NAME                      TEXT_LENGTH         LL
------------------------------ ----------- ----------
A                                       38         38
B                                       38         38
D                                       47         38
V                                       38         38

real: 141

and then changing the constant parameter to the view_name

17:25:15 cptglobal:upk1@nuh794> ed
Wrote file afiedt.buf

1 select view_name, text_length,
2 long_length( 'select text from user_views where view_name = :x', ':x', view_name) ll
3* from user_views
17:25:29 cptglobal:upk1@nuh794> /

VIEW_NAME                      TEXT_LENGTH         LL
------------------------------ ----------- ----------
A                                       38         38
B                                       38         38
D                                       47         47
V                                       38         38

real: 172
17:25:30 cptglobal:upk1@nuh794>

But all in all, not worth keeping you from participating in other discussions. The code did allow me to achieve what I needed.

Thanks

Tom Kyte
March 09, 2005 - 7:40 am UTC

ahh - I see, my example was in fact flawed there -- yes, it should have been view_name, not 'V'

I could not see the forest for the trees - thanks for your persistence.

Setting LONG in sqlplus

Kevin, March 09, 2005 - 6:41 pm UTC

and re the memory usage of long I tried the following test:

Constant ARRAYSIZE=100
Varying LONG from 400K, 4M, 40M

I monitored it in a separate session with VMSTAT (I presume that the storage would be assigned to the SQLPlus process).

The results were:

SQL*Plus: Release 9.2.0.5.0 - Production on Thu Mar 10 10:00:33 2005

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
With the Partitioning and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production

Array fetch/bind size is 100. (arraysize is 100)
Will commit when done. (copycommit is 0)
Maximum long size is 400000. (long is 400000)
   15192 rows selected from <user>@<sid>.world.
   15192 rows inserted into <user1>.testtable.
   15192 rows committed into <user1>.testtable at DEFAULT HOST connection.

The memory usage was about 7K pages:

         procs           memory                   page
r     b     w      avm    free   re   at    pi   po    fr
1     1     0   766770  103892   61   16     1    0     0
1     2     0   839759   96823   54    0    37    0     0
10:01:27 <user>:<sid>@<machine> SQL> exit

SQL*Plus: Release 9.2.0.5.0 - Production on Thu Mar 10 10:01:35 2005

Array fetch/bind size is 100. (arraysize is 100)
Will commit when done. (copycommit is 0)
Maximum long size is 4000000. (long is 4000000)
   15192 rows selected from <user>@<sid>.world.
   15192 rows inserted into <user1>.testtable.
   15192 rows committed into <user1>.testtable at DEFAULT HOST connection.

The memory usage was about 11K pages:

         procs           memory                   page                              
r     b     w      avm    free   re   at    pi   po
1     1     0   783772  102194   96   46    51    0     0       
1     1     0   854008   90844  100    0     1    0     0    
10:02:00 <user>:<sid>@<machine> SQL> exit

Array fetch/bind size is 100. (arraysize is 100)
Will commit when done. (copycommit is 0)
Maximum long size is 40000000. (long is 40000000)
Bus error(coredump)

The memory usage was about 54K pages (and then a coredump):

         procs           memory                   page
r     b     w      avm    free   re   at    pi   po    fr
2     1     0   889484  102190   99   32     0    0     0
1     1     0   820099   48684  131    2     1    0     0
    
I'm not going to spend any more time on this, as it has given me enough information to proceed: setting LONG varies the amount of memory consumed irrespective of the data being transferred, so as far as I can see it is advisable to tailor (within reason) the LONG value to the length of the long columns being transferred.

In my case running:
  exec show_max_long_length('testtable','longcolumn');

testtable: Maximum Long Length for longcolumn is 30878 (319032 rows scanned)

so the 400000 was more than ample

 

Migrate from LONG to VARCHAR2 data type

Ignatius, March 28, 2005 - 6:00 am UTC

Dear Sir,
How can I migrate a LONG datatype to a VARCHAR2 datatype? This migration is into another table.

I tried the following, but it fails, I need your help please.

CREATE PROCEDURE long2varchar IS
  CURSOR getdata IS
  SELECT test_id, long_text varchar_text FROM long_table_A;
BEGIN
  FOR rec IN getdata LOOP
    insert into table_B values (rec.test_id, rec.varchar_text);
  END LOOP;
  COMMIT;
END;
/

SQL> EXECUTE long2varchar;
BEGIN long2varchar; END;

*
ERROR at line 1:
ORA-01461: can bind a LONG value only for insert into a LONG column
ORA-06512: at "BERUK.LONG2VARCHAR", line 7
ORA-06512: at line 1


Thank you for your time. 

Tom Kyte
March 28, 2005 - 7:54 am UTC

rec.varchar_text := substr( rec.varchar_text, 1, 4000 );

What you might find easier would be:

ops$tkyte@ORA9IR2> create table t ( id number, string varchar2(4000), temp clob );
 
Table created.
 
ops$tkyte@ORA9IR2> insert /*+ append */ into t select rownum, null, to_lob(text) from all_views;
 
2253 rows created.
 
 
ops$tkyte@ORA9IR2> commit;
 
Commit complete.
 
ops$tkyte@ORA9IR2> update t set string = dbms_lob.substr(temp,4000,1), temp = null;
 
2253 rows updated.
 
ops$tkyte@ORA9IR2> alter table t drop column temp;
 
Table altered.
 

reader

A reader, April 07, 2005 - 12:44 pm UTC

It seems right to pass a LONG to TO_LOB, but
I am getting ORA-00932.
Could you tell me how to prevent the ORA-00932 error?

Thanks


SQL> select to_lob(TRIGGER_BODY) from dba_triggers where rownum = 1;
select to_lob(TRIGGER_BODY) from dba_triggers where rownum = 1
       *
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected - got LONG

SQL> desc dba_triggers
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OWNER                                              VARCHAR2(30)
 TRIGGER_NAME                                       VARCHAR2(30)
 TRIGGER_TYPE                                       VARCHAR2(16)
 TRIGGERING_EVENT                                   VARCHAR2(227)
 TABLE_OWNER                                        VARCHAR2(30)
 BASE_OBJECT_TYPE                                   VARCHAR2(16)
 TABLE_NAME                                         VARCHAR2(30)
 COLUMN_NAME                                        VARCHAR2(4000)
 REFERENCING_NAMES                                  VARCHAR2(128)
 WHEN_CLAUSE                                        VARCHAR2(4000)
 STATUS                                             VARCHAR2(8)
 DESCRIPTION                                        VARCHAR2(4000)
 ACTION_TYPE                                        VARCHAR2(11)
 TRIGGER_BODY                                       LONG
 

Tom Kyte
April 07, 2005 - 12:55 pm UTC

to_lob is documented to only work in modification statements -- to MOVE a long to a LOB on disk.

Jack, April 29, 2005 - 3:26 pm UTC

In 9i, which one do you prefer?

alter table x modify ( c clob );

or

insert /*+ append*/ into t_lob (l)
select to_lob(l) from t_long;

I think the latter'd be more efficient and less resource intensive.

Tom Kyte
April 29, 2005 - 6:41 pm UTC

but the latter has the lob in the wrong place, doesn't it?

Jack, April 29, 2005 - 6:49 pm UTC

I could rename t_lob once the direct loading is done.

Is it worth trying the second approach if it would run significantly faster?

Tom Kyte
April 29, 2005 - 7:12 pm UTC

rename, reindex, reconstraint, regrant, re-everything.

That would be up to you.

Jack, April 29, 2005 - 7:29 pm UTC

You didn't answer my question.

Would direct load run faster than 'alter table modify'?

Tom Kyte
April 29, 2005 - 7:47 pm UTC

because *IT DEPENDS*

do I have to re-enable 0 constraints or 10
do I have to re-create 0 indexes or 20
do I have to ............


you need to look at ALL of the steps, not just one.

OK

Raju, May 24, 2005 - 1:13 pm UTC

Hi Tom,
i) How do we compute the length of a LONG column?
ii) Can a LOB be converted to a LONG?
Please do reply.


Tom Kyte
May 24, 2005 - 1:49 pm UTC

1) you have to read it "out" to figure out how long it is. You can use dbms_sql to do this -- it has a piecewise long API for reading.
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:834827040715


2) that is the wrong direction, but no, there is no function to do that. You would have to read it out into a program and insert it back in.

I have a curious question

A reader, July 15, 2005 - 5:43 am UTC

Hi

Oracle has been telling users not to use LONG since Oracle 8, but even now in Oracle 10g (after 7 years) the data dictionary still makes extensive use of the LONG datatype. How so? Does it give better performance?

Cheers

Tom Kyte
July 15, 2005 - 8:04 am UTC

sometimes the answer is "just because", that is the only answer I have for that.

Concatenate a long

Mohini, August 29, 2005 - 12:26 am UTC

Database: 9i (Release 2)

Tom,
I am trying to concatenate a long column (of a data dictionary object) and getting an inconsistent datatype error.
Is there a way I can accomplish this?

Here is my script:

me@practice> create table test (geniuses varchar2 (30) default 'TOM');

Table created.

me@practice> Select 'Default ' || DATA_DEFAULT
2 FROM SYS.dba_TAB_COLUMNs
3 where table_name ='TEST';
Select 'Default ' || DATA_DEFAULT
*
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected NUMBER got LONG


me@practice>

Tom Kyte
August 29, 2005 - 1:12 am UTC

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:665224430110
shows a long substr that'll probably work (for less than 32k)

for something more generic, short extract from forthcoming book regarding this exactly:


<quote>
Coping with Legacy LONG Types
A question that arises frequently is, “What about the data dictionary in Oracle?” It is littered with LONG columns, and this makes using the dictionary columns problematic. For example, it is not possible using SQL to search the ALL_VIEWS dictionary view to find all views that contain the text HELLO:
ops$tkyte@ORA10G> select *
  2  from all_views
  3  where text like '%HELLO%';
where text like '%HELLO%'
      *
ERROR at line 3:
ORA-00932: inconsistent datatypes: expected NUMBER got LONG
This issue is not limited to the ALL_VIEWS view; many views are affected:
ops$tkyte@ORA10G> select table_name, column_name
  2  from dba_tab_columns
  3  where data_type in ( 'LONG', 'LONG RAW' )
  4  and owner = 'SYS'
  5  and table_name like 'DBA%';
 
TABLE_NAME                     COLUMN_NAME
------------------------------ ------------------------------
DBA_VIEWS                      TEXT
DBA_TRIGGERS                   TRIGGER_BODY
DBA_TAB_SUBPARTITIONS          HIGH_VALUE
DBA_TAB_PARTITIONS             HIGH_VALUE
DBA_TAB_COLUMNS                DATA_DEFAULT
DBA_TAB_COLS                   DATA_DEFAULT
DBA_SUMMARY_AGGREGATES         MEASURE
DBA_SUMMARIES                  QUERY
DBA_SUBPARTITION_TEMPLATES     HIGH_BOUND
DBA_SQLTUNE_PLANS              OTHER
DBA_SNAPSHOTS                  QUERY
DBA_REGISTERED_SNAPSHOTS       QUERY_TXT
DBA_REGISTERED_MVIEWS          QUERY_TXT
DBA_OUTLINES                   SQL_TEXT
DBA_NESTED_TABLE_COLS          DATA_DEFAULT
DBA_MVIEW_ANALYSIS             QUERY
DBA_MVIEW_AGGREGATES           MEASURE
DBA_MVIEWS                     QUERY
DBA_IND_SUBPARTITIONS          HIGH_VALUE
DBA_IND_PARTITIONS             HIGH_VALUE
DBA_IND_EXPRESSIONS            COLUMN_EXPRESSION
DBA_CONSTRAINTS                SEARCH_CONDITION
DBA_CLUSTER_HASH_EXPRESSIONS   HASH_EXPRESSION
 
23 rows selected.
So, what is the solution? If you want to make use of these columns in SQL, then you’ll need to convert them to a SQL-friendly type. You can use a user-defined function for doing so. The following example demonstrates how to accomplish this using a LONG SUBSTR function that will allow you to effectively convert any 4,000 bytes of a LONG type into a VARCHAR2, for use with SQL. When you are done, you’ll be able to query:
ops$tkyte@ORA10G> select *
  2    from (
  3  select owner, view_name,
  4         long_help.substr_of( 'select text
  5                                 from dba_views
  6                                where owner = :owner
  7                                  and view_name = :view_name',
  8                               1, 4000,
  9                               'owner', owner,
 10                               'view_name', view_name ) substr_of_view_text
 11    from dba_views
 12   where owner = user
 13         )
 14   where upper(substr_of_view_text) like '%INNER%'
 15  /
You’ve converted the first 4,000 bytes of the TEXT column from LONG to VARCHAR2 and can now use a predicate on it. Using the same technique, you could implement your own INSTR, LIKE, and so forth for LONG types as well. In this book, I’ll only demonstrate how to get the substring of a LONG type.
The package we will implement has the following specification:
ops$tkyte@ORA10G> create or replace package long_help
  2  authid current_user
  3  as
  4      function substr_of
  5      ( p_query in varchar2,
  6        p_from  in number,
  7        p_for   in number,
  8        p_name1 in varchar2 default NULL,
  9        p_bind1 in varchar2 default NULL,
 10        p_name2 in varchar2 default NULL,
 11        p_bind2 in varchar2 default NULL,
 12        p_name3 in varchar2 default NULL,
 13        p_bind3 in varchar2 default NULL,
 14        p_name4 in varchar2 default NULL,
 15        p_bind4 in varchar2 default NULL )
 16      return varchar2;
 17  end;
 18  /
Package created.
Note that on line 2, we specify AUTHID CURRENT_USER. This makes the package run as the invoker, with all roles and grants in place. This is important for two reasons. First, we’d like the database security to not be subverted—this package will only return substrings of columns we (the invoker) are allowed to see. Second, we’d like to install this package once in the database and have its functionality available for all to use; using invoker rights allows us to do that. If we used the default security model of PL/SQL—definer rights—the package would run with the privileges of the owner of the package, meaning it would only be able to see data the owner of the package could see, which may not include the set of data the invoker is allowed to see.
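To make the install-once, invoker-rights deployment concrete, the one-time setup might look like this (a sketch; the grant-to-PUBLIC and synonym choices are assumptions, not from the original):

```sql
-- Install once, as some utility schema, then open it up:
grant execute on long_help to public;
create public synonym long_help for long_help;

-- Because LONG_HELP is AUTHID CURRENT_USER, when any user calls
-- long_help.substr_of(...), the dynamic SQL inside is parsed and
-- executed with that caller's privileges and roles, so each caller
-- can only ever see LONG data they could query directly themselves.
```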
The concept behind the function SUBSTR_OF is to take a query that selects at most one row and one column: the LONG value we are interested in. SUBSTR_OF will parse that query if needed, bind any inputs to it, and fetch the results programmatically, returning the necessary piece of the LONG value.
The package body, the implementation, begins with two global variables. The G_CURSOR variable holds a persistent cursor open for the duration of our session. This is to avoid having to repeatedly open and close the cursor and to avoid parsing SQL more than we need to. The second global variable, G_QUERY, is used to remember the text of the last SQL query we’ve parsed in this package. As long as the query remains constant, we’ll just parse it once. So, even if we query 5,000 rows in a query, as long as the SQL query we pass to this function doesn’t change, we’ll have only one parse call:
ops$tkyte@ORA10G> create or replace package body long_help
  2  as
  3
  4      g_cursor number := dbms_sql.open_cursor;
  5      g_query  varchar2(32765);
  6
Next in this package is a private procedure, BIND_VARIABLE, that we’ll use to bind inputs passed to us by the caller. We implemented this as a separate private procedure only to make life easier; we want to bind only when the input name is NOT NULL. Rather than perform that check four times in the code for each input parameter, we do it once in this procedure:
  7  procedure bind_variable( p_name in varchar2, p_value in varchar2 )
  8  is
  9  begin
 10      if ( p_name is not null )
 11      then
 12          dbms_sql.bind_variable( g_cursor, p_name, p_value );
 13      end if;
 14  end;
 15
Next is the actual implementation of SUBSTR_OF in the package body. This routine begins with a function declaration from the package specification and the declaration for some local variables. L_BUFFER will be used to return the value, and L_BUFFER_LEN will be used to hold the length returned by an Oracle-supplied function:
 16
 17  function substr_of
 18  ( p_query in varchar2,
 19    p_from  in number,
 20    p_for   in number,
 21    p_name1 in varchar2 default NULL,
 22    p_bind1 in varchar2 default NULL,
 23    p_name2 in varchar2 default NULL,
 24    p_bind2 in varchar2 default NULL,
 25    p_name3 in varchar2 default NULL,
 26    p_bind3 in varchar2 default NULL,
 27    p_name4 in varchar2 default NULL,
 28    p_bind4 in varchar2 default NULL )
 29  return varchar2
 30  as
 31      l_buffer       varchar2(4000);
 32      l_buffer_len   number;
 33  begin
Now, the first thing our code does is a sanity check on the P_FROM and P_FOR inputs. P_FROM must be a number greater than or equal to 1, and P_FOR must be between 1 and 4,000—just like the built-in function SUBSTR:
 34      if ( nvl(p_from,0) <= 0 )
 35      then
 36          raise_application_error
 37          (-20002, 'From must be >= 1 (positive numbers)' );
 38      end if;
 39      if ( nvl(p_for,0) not between 1 and 4000 )
 40      then
 41          raise_application_error
 42          (-20003, 'For must be between 1 and 4000' );
 43      end if;
 44
Next, we’ll check to see if we are getting a new query that needs to be parsed. If the last query we parsed is the same as the current query, we can skip this step. It is very important to note that on line 47 we are verifying that the P_QUERY passed to us is just a SELECT—we will use this package only to execute SQL SELECT statements. This check validates that for us:
 45      if ( p_query <> g_query or g_query is NULL )
 46      then
 47          if ( upper(trim(nvl(p_query,'x'))) not like 'SELECT%')
 48          then
 49              raise_application_error
 50              (-20001, 'This must be a select only' );
 51          end if;
 52          dbms_sql.parse( g_cursor, p_query, dbms_sql.native );
 53          g_query := p_query;
 54      end if;
We are ready to bind the inputs to this query. Any non-NULL names that were passed to us will be “bound” to the query, so when we execute it, it finds the right row:
 55      bind_variable( p_name1, p_bind1 );
 56      bind_variable( p_name2, p_bind2 );
 57      bind_variable( p_name3, p_bind3 );
 58      bind_variable( p_name4, p_bind4 );
 59
And now we can execute the query and fetch the row. Then using DBMS_SQL.COLUMN_VALUE_LONG, we extract the necessary substring of the LONG and return it:
 60      dbms_sql.define_column_long(g_cursor, 1);
 61      if (dbms_sql.execute_and_fetch(g_cursor)>0)
 62      then
 63          dbms_sql.column_value_long
 64          (g_cursor, 1, p_for, p_from-1,
 65           l_buffer, l_buffer_len );
 66      end if;
 67      return l_buffer;
 68  end substr_of;
 69
 70  end;
 71  / 
Package body created.
That’s it—you should be able to use that package against any legacy LONG column in your database, allowing you to perform many WHERE clause operations that were not possible before. For example, you can now find all partitions in your schema such that the HIGH_VALUE has the year 2003 in it:
ops$tkyte@ORA10G> select *
  2    from (
  3  select table_owner, table_name, partition_name,
  4         long_help.substr_of
  5         ( 'select high_value
  6              from all_tab_partitions
  7             where table_owner = :o
  8               and table_name = :n
  9               and partition_name = :p',
 10            1, 4000,
 11            'o', table_owner,
 12            'n', table_name,
 13            'p', partition_name ) high_value
 14    from all_tab_partitions
 15   where table_name = 'T'
 16     and table_owner = user
 17         )
 18   where high_value like '%2003%'
 19  /
 
TABLE_OWN TABLE PARTIT HIGH_VALUE
--------- ----- ------ ------------------------------
OPS$TKYTE T     PART1  TO_DATE(' 2003-03-13 00:00:00'
                       , 'SYYYY-MM-DD HH24:MI:SS', 'N
                       LS_CALENDAR=GREGORIAN')
 
OPS$TKYTE T     PART2  TO_DATE(' 2003-03-14 00:00:00'
                       , 'SYYYY-MM-DD HH24:MI:SS', 'N
                       LS_CALENDAR=GREGORIAN')
Using this same technique—that of processing the result of a query that returns a single row with a single LONG column in a function—you can implement your own INSTR, LIKE, and so on as needed. 
This implementation works well on the LONG type, but it will not work on LONG RAW types. LONG RAWs are not piecewise accessible (there is no COLUMN_VALUE_LONG_RAW function in DBMS_SQL). Fortunately, this is not too serious a restriction, since LONG RAWs are not used in the dictionary and the need to substring them in order to search them is rare. If you do have such a need, however, you cannot use PL/SQL unless the LONG RAW is 32KB or less, as there is simply no method for dealing with LONG RAWs over 32KB in PL/SQL itself. Java, C, C++, Visual Basic, or some other language would have to be used.
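For LONG RAWs of 32KB or less, a plain PL/SQL fetch into a LONG RAW variable does work; a minimal sketch (table and column names here are hypothetical):

```sql
declare
    l_data long raw;   -- a PL/SQL LONG RAW variable holds up to 32,760 bytes
begin
    select long_raw_column
      into l_data
      from t
     where id = 1;
    dbms_output.put_line( 'fetched ' || utl_raw.length(l_data) || ' bytes' );
end;
/
```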
Another approach is to temporarily convert the LONG or LONG RAW into a CLOB or BLOB using the TO_LOB built-in function and a global temporary table. The conversion statement in your PL/SQL procedure could be as follows:
insert into global_temp_table ( blob_column )
select to_lob(long_raw_column) from t where…
This would work well in an application that occasionally needs to work with a single LONG RAW value. You would not want to continuously do that, however, due to the amount of work involved. If you find yourself needing to resort to this technique frequently, you should definitely convert the LONG RAW to a BLOB once and be done with it.
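Put together, the temporary-table technique might look like this (a sketch; all table and column names are hypothetical):

```sql
-- One-time setup: a scratch table whose rows vanish at commit
create global temporary table global_temp_table
( blob_column blob )
on commit delete rows;

-- Stage the one LONG RAW value you need as a BLOB...
insert into global_temp_table ( blob_column )
select to_lob(long_raw_column) from t where id = :id;

-- ...and now the full LOB API is available against it
select dbms_lob.getlength( blob_column )
  from global_temp_table;
```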
 

Thanks Tom.

Abhishek Gohil, August 30, 2005 - 10:18 am UTC

The information was really useful and helpful. Thanks a lot again.

Gateway varchar/text/long problem

jc, September 29, 2005 - 4:58 pm UTC

I'm using Oracle's gateway product to bring in data from MS SQL Server. These MS developers have really made my life hard by using the max length of 8,000 for almost every varchar field. Reading through TARs, I've found that there is a gateway "limitation" of 255 chars. In order to handle bigger lengths you have to cast it to "text". OK, now that you do that, it comes across the dblink as a LONG datatype. Now, as far as I can tell, there is absolutely nothing I can do with it from within my Oracle instance because it's a "remote database". Now I have to use OCI or Pro*C to get to the data. No question here, just FYI for anyone needing an answer.

Tom Kyte
September 30, 2005 - 8:44 am UTC

plsql can deal with longs up to 32k? no need for oci or pro*c here.

remote longs

jc, September 30, 2005 - 10:05 am UTC

Even from a remote DB? I believe gateway is considered remote.

Tom Kyte
September 30, 2005 - 11:16 am UTC

did you try it.


scott@ORA9IR2> @test
scott@ORA9IR2> declare
  2      l_max number := 0;
  3  begin
  4      for x in ( select y from t@ora10g )
  5      loop
  6          if (length(x.y) > l_max)
  7          then
  8              l_max := length(x.y);
  9          end if;
 10      end loop;
 11      dbms_output.put_line( 'longest was ' || l_max );
 12  end;
 13  /
longest was 32760

PL/SQL procedure successfully completed.



remote long

jc, September 30, 2005 - 12:26 pm UTC

SQL> 
SQL> SET serveroutput ON
SQL> DECLARE
  2    v_num NUMBER:=0;
  3  BEGIN
  4    FOR x IN (SELECT "Description" AS y
  5              FROM FI_ACTIVITYSTATUS@crmlnk) LOOP
  6      v_num := greatest(v_num,length(x.y));
  7    END LOOP;
  8    dbms_output.put_line(v_num);
  9  END;
 10  /

6

PL/SQL procedure successfully completed

SQL> INSERT INTO FI.FI_ACTIVITYSTATUS
  2        (ACTIVITYSTATUSCODE
  3        ,DESCRIPTION)
  4  SELECT "ActivityStatusCode" as ACTIVITYSTATUSCODE
  5  ,      "Description" as DESCRIPTION
  6  FROM FI_ACTIVITYSTATUS@crmlnk;

INSERT INTO FI.FI_ACTIVITYSTATUS
      (ACTIVITYSTATUSCODE
      ,DESCRIPTION)
SELECT "ActivityStatusCode" as ACTIVITYSTATUSCODE
,      "Description" as DESCRIPTION
FROM FI_ACTIVITYSTATUS@crmlnk

ORA-00997: illegal use of LONG datatype

SQL>  

Tom Kyte
September 30, 2005 - 2:07 pm UTC

you cannot use longs in SQL like that (local or remote, in fact), but plsql worked...

remote longs

jc, September 30, 2005 - 5:18 pm UTC

Yea, however, looping through millions of records across several tables each night isn't the solution I was counting on. I don't think my time constraint will allow for this.

Tom Kyte
October 01, 2005 - 8:31 pm UTC

my point -- it is not an oci/proc thing, it is a plsql thing - you made the statement

Now I have to use OCI or ProC* to get to the data.


but you do not (and plsql would likely be faster than oci or proc in this case)

LONG TO CLOB

ravs, May 18, 2006 - 2:57 am UTC

Hi Tom,
This forum is very useful.
I have a table with a LONG column on 8.1.7.4, with 40 CPUs on Solaris 5.3.
We are upgrading to 9iR2 and are trying to convert the LONG to a CLOB using the following approaches
(before that, I removed all the chained rows):
1) ALTER TABLE
2) insert append parallel
3) export and import (with different buffer sizes, and with both commit=y and commit=n)
As the table is very large (around 13.2GB), these are taking a lot of time (ranging from 22 to 33 hours).
Is there any way to reduce the time further?
The client wants it reduced to below 4 hours.
- Thanks in advance
ravi


Tom Kyte
May 19, 2006 - 9:06 am UTC

how could you remove the chained rows - how long did that take (longs almost always imply chained rows as they are stored inline)

parallel create table as select might be what you are looking for - however, you will be constrained by your IO system more than anything in all likelihood

LONG to CLOB

ravs, May 22, 2006 - 5:15 am UTC

Thank you very much Tom for your reply.
a) This is how we got rid of chained rows:
We created the chained-rows table with $ORACLE_HOME/rdbms/admin/utlchain.sql and analyzed the original table, listing chained rows into it. (The database was migrated from 8i to 9i with the LONG columns intact prior to this.) We created a temp table (using CTAS) holding all the chained rows, which took around 4 minutes, then deleted the chained rows from the original table and inserted the rows back from the temp table. Analyzing the original table again showed zero chained rows, but the number of blocks occupied was the same.
So we took an export of the table and imported it back, which reduced the blocks occupied to 0.95 million; earlier it was 1.6 million blocks. The import took around two and a half hours.

b) Yes Tom, as you have suggested, need to try CTAS with parallel and nologging.

c) Constrained by system IO? You mean OS IO? If that is the case, most probably I need to dig more into these details. Do you suggest any settings for improving OS IO?
- Thanks in advance,
ravi

Tom Kyte
May 22, 2006 - 7:57 am UTC

if you did that, then your longs are not very long at all - in fact, I'd almost bet they were all 4,000 characters or less. Else they would be CHAINED rows as well.

I wonder, how did you do a create table as select with LONG COLUMNS? You are confusing me here.....


LONG to CLOB

ravs, May 23, 2006 - 12:35 am UTC

Tom,
sorry for confusing you.
I used "create table junks as
select col1,col2,to_lob(col3) from t where rowid in (select head_rowid from chained_rows)"
and "delete from t where rowid in (select head_rowid from chained_rows)",
and then inserted these rows back into the original table with insert /*+ append */ into t select col1,col2,col3 from junks. It worked. Then I exported the table and imported it back. Analyzing the table (compute, not estimate) showed zero chained rows and the number of blocks reduced to 915,950; earlier it was more than 1.6 million blocks. All this took around two and a half hours.
Note that my db block size is 8k.
Now I started converting the LONG in the table to a CLOB.
First I tried alter table modify, which was running and running...
Then I created a separate table, create table junks as select rownum sno, rowid id from t (the table with the LONG), created a unique index on (sno, id), and then tried this option:
create or replace procedure test_case
(
    i_start number,
    i_end   number
)
as
    startno number := i_start;
    endno   number;
    tailno  number := i_end;
begin
    while (startno <= tailno) loop
        endno := startno + 10000;
        insert /*+ append */ into temptbl
        select col1,col2,...to_lob(col10).... from t
        where rowid in (select id from junks where sno between startno and endno);
        commit;
        startno := endno + 1;
    end loop;
exception
....
end;

I started 10 SQL*Plus sessions and ran this procedure with different inputs, like
exec test_case(1,200000);
.
.
.
exec test_case(1800001,2005017); -- 10th session
It took almost 52 hours to insert 1.8M rows before a co-worker killed it unknowingly.
When I analyzed this temp table, I found it occupied only around 0.45M blocks, and the chained count is zero.
There are LONGs larger than 4,000 bytes; I even saw rows of some 6,000 bytes (though perhaps very few in number).
Block size is 8k.
some other inputs...
the original table has around 2 million rows and 135 columns, one of which is a LONG.

Please advise
- Thanks in advance
ravi


Tom Kyte
May 23, 2006 - 7:30 am UTC

I don't know why you would do this using any procedural code.

Either just use create table as select with to_lob
or insert /*+ append */ the whole lot of them in a single sql
or alter table modify to convert the long to a lob (if you are on current software)


No way I would use procedural code. Might throw in parallel create table as select or parallel DML
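Those suggestions, using obviously simplified table and column names (hypothetical, not from the thread), would look something like:

```sql
-- 1) Parallel, nologging create table as select
create table t_clob parallel 8 nologging
as
select /*+ parallel(t 8) */ c1, c2, to_lob(long_col) long_col
  from t;

-- 2) Direct-path parallel insert into a precreated table
alter session enable parallel dml;
insert /*+ append parallel(t_clob 8) */ into t_clob
select c1, c2, to_lob(long_col) from t;
commit;

-- 3) In-place conversion (9i and later)
alter table t modify ( long_col clob );
```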

Long to Clob

ravs, May 29, 2006 - 6:01 am UTC

Dear Tom,
Thanks a lot for all the inputs.
I tried each and every option; nothing comes in at less than 22 hours.
I have a 24-CPU box with 36GB memory on Sun Solaris.
I tried the following listed options:
1) CTAS with parallel option
2) insert append with parallel option.

The other question I have is: can I use bulk binds with LONGs? It is throwing ORA-3113 - end of file on communication channel. My Oracle version is "Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production".

As you rightly pointed out, most of the time I was constrained by system IO. I don't know how to solve it.
Another interesting point I observed while running the test is high waits on
1) direct path read/write (lob) (due to the LOB column, which is placed in a separate, locally managed tablespace)
2) enqueue waits (locks on blocks)
3) waits on db file sequential read (index or rowid reads)

Can you please suggest how I can minimize them?
My table with Long is of size 13.5GB placed on different datafiles and on different disks.
My destination Clob table is also placed on different datafiles and on different disks.
I know it supports alter table modify, but it is taking days (I killed it after 52 hours).

please please suggest Tom.
Waiting for your kind help
- Thanks
ravi


Tom Kyte
May 30, 2006 - 8:14 am UTC

and did you verify that things are actually taking place in parallel.
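One quick way to check that, from another session while the load runs (a sketch against the standard dynamic performance views):

```sql
-- Which parallel slaves are attached to which query coordinator,
-- and at what actual vs. requested degree of parallelism:
select qcsid, sid, server_set, degree, req_degree
  from v$px_session;

-- If DEGREE comes back lower than REQ_DEGREE (or there are no
-- rows at all), the statement was downgraded or ran serially.
```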

And do you have the IO bandwidth that would be necessary to make this faster? (It isn't so much a function of CPU and memory as a function of "how fast can I move the bits from this disk to that disk.")

3113 = contact support, HOWEVER, bulk binding isn't really compatible with either of

a) fetching longs
b) creating lobs


If you are constrained by IO - as you just said - increasing IO bandwidth is going to be, well, paramount if you want to "go faster". I don't care how much ram you have, how much cpu you have, if you cannot give the IO resources - you won't go any faster.

We would expect "high waits" on direct path read/write of lob, that is in fact all you are doing (besides reading the longs of course)

As for the enqueue waits, you would want to look into that - if you were the only game in town, one would not expect that. Who or What were you contending with and what type of enqueue waits were you waiting on exactly (big possible list)

If you had lots of db file sequential reads - you would want to find out on what segment - that could just be the lob maintenance - or it could be that the query you used to read the longs in the first place was "bad" (used an index and shouldn't have) or it could be the retrieval of the longs that span multiple blocks.


I don't see how a tiny bit of data (13.5gig) would take 22 hours - so, show us, the original table structure, the query plan used and the create table as select statement.

Thanks Tom

ravs, May 30, 2006 - 11:48 pm UTC

Dear Tom,
As you have pointed out
1) I have verified that parallel processing is taking place: I see 20 processes, p000 to p019, running, each executing a statement like this.
SELECT /*+ Q147000 NO_EXPAND ROWID(A1) */ A1."OBJID",A1."TITLE",A1."S_TITLE",A1."ID_NUMBER",A1."CREATION_TIME",A1."INTERNAL_CASE",A1."HANGUP_TIME",A1."ALT_PHONE_NUM",A1."PHONE_NUM",A1."PICKUP_EXT",A1."CASE_HISTORY",A1."TOPICS_TITLE",A1."YANK_FLAG",A1."SERVER_STATUS",A1."SUPPORT_TYPE",A1."WARRANTY_FLAG",A1."SUPPORT_MSG",A1."ALT_FIRST_NAME",A1."ALT_LAST_NAME",A1."ALT_FAX_NUMBER",A1."ALT_E_MAIL",A1."ALT_SITE_NAME",A1."ALT_ADDRESS",A1."ALT_CITY",A1."ALT_STATE",A1."ALT_ZIPCODE",A1."FCS_CC_NOTIFY",A1."SYMPTOM_CODE",A1."CURE_CODE",A1."SITE_TIME",A1."ALT_PROD_SERIAL",A1."MSG_WAIT_COUNT",A1."REPLY_WAIT_COUNT",A1."REPLY_STATE",A1."OPER_SYSTEM",A1."CASE_SUP_TYPE",A1."PAYMENT_METHOD",A1."REF_NUMBER",A1."DOA_CHECK_BOX",A1."CUSTOMER_SATIS",A1."CUSTOMER_CODE",A1."SERVICE_ID",A1."ALT_PHONE",A1."FORWARD_CHECK",A1."CCLIST1",A1."CCLIST2",A1."KEYWORDS",A1."OWNERSHIP_STMP",A1."MODIFY_STMP",A1."DIST",A1."ARCH_IND",A1."IS_SUPERCASE",A1."DEV",A1."X_FAULTTOCSS",A1."X_FAULT_REP_ORIG",A1."X_SPECIAL_NOTE",A1."X_FAULT_LOC_CODE",A1."X_CUST_ADV_CLEAR",A1."X_OST_DURATION",A1."X_CUST_ADV_TIME",A1."X_RESTORE_TIME",A1."X_RESP_TIME",A1."X_REPORT_TIME",A1."X_STAGE",A1."X_ESC_LEVEL",A1."X_WM_STATUS",A1."X_REFUND_DUE",A1."X_TOTAL_JEOPARDY",A1."X_STAGE_JEOPARDY",A1."X_FAULT_CODE",A1."X_WORKMANAGER_DATA",A1."X_CLEAR_DETAILS",A1."X_FAULT_TYPE",A1."X_CLEAR_DESC",A1."X_FAILURE_ANALYSIS",A1."X_OST_RESPONSE",A1."X_OST_RESTORE",A1."X_OST_CLEAR",A1."X_CUST_CONTACT_PERMISSION",A1."X_NON_LIVE_FLAG",A1."X_CLK_RESTORE",A1."X_CLK_CLEAR",A1."X_CLK_RESPONSE",A1."X_CASE_TOTAL_JEOPARDY",A1."X_CASE_TOTAL_FAILURE",A1."X_CASE_STAGE_JEOPARDY",A1."X_CASE_STAGE_FAILURE",A1."X_CASE_TARGET_TIME",A1."X_STAGE_TARGET_TIME",A1."X_CASE_JEOPARDY_TIME",A1."X_STAGE_JEOPARDY_TIME",A1."X_UNPARK_TIME",A1."CASE_SOLN2WORKAROUND",A1."CASE_PREVQ2QUEUE",A1."CASE_CURRQ2QUEUE",A1."CASE_WIP2WIPBIN",A1."CASE_LOGIC2PROG_LOGIC",A1."CASE_OWNER2USER",A1."CASE_STATE2CONDITION",A1."CASE_ORIGINATOR2USER",A1."CASE_EMPL2EMPLOYEE",A1."CALLTYPE2GBST_ELM",A1."R
ESPPRTY2GBST_ELM",A1."RESPSVRTY2GBST_ELM",A1."CASE_PROD2SITE_PART",A1."CASE_REPORTER2SITE",A1."CASE_REPORTER2CONTACT",A1."ENTITLEMENT2CONTRACT",A1."CASESTS2GBST_ELM",A1."CASE_RIP2RIPBIN",A1."COVRD_PPI2SITE_PART",A1."CASE_DISTR2SITE",A1."CASE2ADDRESS",A1."CASE_NODE2SITE_PART",A1."DE_PRODUCT2SITE_PART",A1."CASE_PRT2PART_INFO",A1."DE_PRT2PART_INFO",A1."ALT_CONTACT2CONTACT",A1."TASK2OPPORTUNITY",A1."CASE2LIFE_CYCLE",A1."CASE_VICTIM2CASE",A1."ENTITLE2CONTR_ITM",A1."X_OWNED_AT2WORKGROUP",A1."X_CUST_HANDLING2WORKGROUP",A1."X_THIRD_CONTACT2CONTACT",A1."X_CASE2OST",A1."X_FLT_FRP2USER_DATA_ELM",A1."X_RECEP_FRP2USER_DATA_ELM",A1."X_MSF_CASE2NFM_TROUBLE_TICKET",A1."X_ECO_CASE_ID",A1."X_CASE2REASONCODE",A1."X_EXT_CASE_ID",A1."X_EXT_SYS_ID",A1."X_PROC_NET_UPDATE",A1."X_LLU_FLAG" FROM "xxxx"."TABLE_xxxx" PX_GRANULE(0, BLOCK_RANGE, DYNAMIC) A1
2) I am the only person using the test database; no other processes are running except mine.

This is the original table structure.
CREATE TABLE TABLE_XXXX
(
OBJID NUMBER,
TITLE VARCHAR2(80 BYTE),
S_TITLE VARCHAR2(80 BYTE),
ID_NUMBER VARCHAR2(255 BYTE),
CREATION_TIME DATE,
INTERNAL_CASE NUMBER,
HANGUP_TIME DATE,
ALT_PHONE_NUM VARCHAR2(20 BYTE),
PHONE_NUM VARCHAR2(20 BYTE),
PICKUP_EXT VARCHAR2(8 BYTE),
CASE_HISTORY LONG,
TOPICS_TITLE VARCHAR2(255 BYTE),
YANK_FLAG NUMBER,
SERVER_STATUS VARCHAR2(2 BYTE),
SUPPORT_TYPE VARCHAR2(2 BYTE),
WARRANTY_FLAG VARCHAR2(2 BYTE),
SUPPORT_MSG VARCHAR2(80 BYTE),
ALT_FIRST_NAME VARCHAR2(30 BYTE),
ALT_LAST_NAME VARCHAR2(30 BYTE),
ALT_FAX_NUMBER VARCHAR2(20 BYTE),
ALT_E_MAIL VARCHAR2(80 BYTE),
ALT_SITE_NAME VARCHAR2(80 BYTE),
ALT_ADDRESS VARCHAR2(200 BYTE),
ALT_CITY VARCHAR2(30 BYTE),
ALT_STATE VARCHAR2(30 BYTE),
ALT_ZIPCODE VARCHAR2(20 BYTE),
FCS_CC_NOTIFY NUMBER,
SYMPTOM_CODE VARCHAR2(10 BYTE),
CURE_CODE VARCHAR2(10 BYTE),
SITE_TIME DATE,
ALT_PROD_SERIAL VARCHAR2(30 BYTE),
MSG_WAIT_COUNT NUMBER,
REPLY_WAIT_COUNT NUMBER,
REPLY_STATE NUMBER,
OPER_SYSTEM VARCHAR2(20 BYTE),
CASE_SUP_TYPE VARCHAR2(2 BYTE),
PAYMENT_METHOD VARCHAR2(30 BYTE),
REF_NUMBER VARCHAR2(80 BYTE),
DOA_CHECK_BOX NUMBER,
CUSTOMER_SATIS NUMBER,
CUSTOMER_CODE VARCHAR2(20 BYTE),
SERVICE_ID VARCHAR2(30 BYTE),
ALT_PHONE VARCHAR2(20 BYTE),
FORWARD_CHECK NUMBER,
CCLIST1 VARCHAR2(255 BYTE),
CCLIST2 VARCHAR2(255 BYTE),
KEYWORDS VARCHAR2(255 BYTE),
OWNERSHIP_STMP DATE,
MODIFY_STMP DATE,
DIST NUMBER,
ARCH_IND NUMBER,
IS_SUPERCASE NUMBER,
DEV NUMBER,
X_FAULTTOCSS VARCHAR2(1 BYTE),
X_FAULT_REP_ORIG VARCHAR2(1 BYTE),
X_SPECIAL_NOTE VARCHAR2(80 BYTE),
X_FAULT_LOC_CODE VARCHAR2(2 BYTE),
X_CUST_ADV_CLEAR VARCHAR2(1 BYTE),
X_OST_DURATION NUMBER,
X_CUST_ADV_TIME DATE,
X_RESTORE_TIME DATE,
X_RESP_TIME DATE,
X_REPORT_TIME DATE,
X_STAGE VARCHAR2(30 BYTE),
X_ESC_LEVEL VARCHAR2(2 BYTE),
X_WM_STATUS VARCHAR2(3 BYTE),
X_REFUND_DUE VARCHAR2(1 BYTE),
X_TOTAL_JEOPARDY VARCHAR2(2 BYTE),
X_STAGE_JEOPARDY VARCHAR2(2 BYTE),
X_FAULT_CODE VARCHAR2(2 BYTE),
X_WORKMANAGER_DATA VARCHAR2(255 BYTE),
X_CLEAR_DETAILS VARCHAR2(2 BYTE),
X_FAULT_TYPE VARCHAR2(2 BYTE),
X_CLEAR_DESC VARCHAR2(21 BYTE),
X_FAILURE_ANALYSIS VARCHAR2(3 BYTE),
X_OST_RESPONSE NUMBER,
X_OST_RESTORE NUMBER,
X_OST_CLEAR NUMBER,
X_CUST_CONTACT_PERMISSION NUMBER,
X_NON_LIVE_FLAG NUMBER,
X_CLK_RESTORE NUMBER,
X_CLK_CLEAR NUMBER,
X_CLK_RESPONSE NUMBER,
X_CASE_TOTAL_JEOPARDY NUMBER,
X_CASE_TOTAL_FAILURE NUMBER,
X_CASE_STAGE_JEOPARDY NUMBER,
X_CASE_STAGE_FAILURE NUMBER,
X_CASE_TARGET_TIME DATE,
X_STAGE_TARGET_TIME DATE,
X_CASE_JEOPARDY_TIME DATE,
X_STAGE_JEOPARDY_TIME DATE,
X_UNPARK_TIME DATE,
CASE_SOLN2WORKAROUND NUMBER,
CASE_PREVQ2QUEUE NUMBER,
CASE_CURRQ2QUEUE NUMBER,
CASE_WIP2WIPBIN NUMBER,
CASE_LOGIC2PROG_LOGIC NUMBER,
CASE_OWNER2USER NUMBER,
CASE_STATE2CONDITION NUMBER,
CASE_ORIGINATOR2USER NUMBER,
CASE_EMPL2EMPLOYEE NUMBER,
CALLTYPE2GBST_ELM NUMBER,
RESPPRTY2GBST_ELM NUMBER,
RESPSVRTY2GBST_ELM NUMBER,
CASE_PROD2SITE_PART NUMBER,
CASE_REPORTER2SITE NUMBER,
CASE_REPORTER2CONTACT NUMBER,
ENTITLEMENT2CONTRACT NUMBER,
CASESTS2GBST_ELM NUMBER,
CASE_RIP2RIPBIN NUMBER,
COVRD_PPI2SITE_PART NUMBER,
CASE_DISTR2SITE NUMBER,
CASE2ADDRESS NUMBER,
CASE_NODE2SITE_PART NUMBER,
DE_PRODUCT2SITE_PART NUMBER,
CASE_PRT2PART_INFO NUMBER,
DE_PRT2PART_INFO NUMBER,
ALT_CONTACT2CONTACT NUMBER,
TASK2OPPORTUNITY NUMBER,
CASE2LIFE_CYCLE NUMBER,
CASE_VICTIM2CASE NUMBER,
ENTITLE2CONTR_ITM NUMBER,
X_OWNED_AT2WORKGROUP NUMBER,
X_CUST_HANDLING2WORKGROUP NUMBER,
X_THIRD_CONTACT2CONTACT NUMBER,
X_CASE2OST NUMBER,
X_FLT_FRP2USER_DATA_ELM NUMBER,
X_RECEP_FRP2USER_DATA_ELM NUMBER,
X_MSF_CASE2NFM_TROUBLE_TICKET NUMBER,
X_ECO_CASE_ID VARCHAR2(22 BYTE),
X_CASE2REASONCODE NUMBER,
X_EXT_CASE_ID VARCHAR2(22 BYTE),
X_EXT_SYS_ID VARCHAR2(7 BYTE),
X_PROC_NET_UPDATE NUMBER,
X_LLU_FLAG VARCHAR2(1 BYTE)
);
This has around 2M records.
I know the LONG should be at the end, but this is a product and the table structure was created by the vendor like this; all X_ columns are customized columns (not in the product). I cannot partition the table because the product doesn't support partitioned tables.
As I mentioned in my earlier postings, I tried three options.
1) Alter table table_xxxx modify (case_history clob default empty_clob()), which never completed.
2) The second approach i have used is
CREATE TABLE TABLE_XXXX_T
(
OBJID NUMBER,
TITLE VARCHAR2(80 BYTE),
S_TITLE VARCHAR2(80 BYTE),
ID_NUMBER VARCHAR2(255 BYTE),
CREATION_TIME DATE,
INTERNAL_CASE NUMBER,
HANGUP_TIME DATE,
ALT_PHONE_NUM VARCHAR2(20 BYTE),
PHONE_NUM VARCHAR2(20 BYTE),
PICKUP_EXT VARCHAR2(8 BYTE),
CASE_HISTORY CLOB DEFAULT EMPTY_CLOB(),
TOPICS_TITLE VARCHAR2(255 BYTE),
YANK_FLAG NUMBER,
SERVER_STATUS VARCHAR2(2 BYTE),
SUPPORT_TYPE VARCHAR2(2 BYTE),
WARRANTY_FLAG VARCHAR2(2 BYTE),
SUPPORT_MSG VARCHAR2(80 BYTE),
ALT_FIRST_NAME VARCHAR2(30 BYTE),
ALT_LAST_NAME VARCHAR2(30 BYTE),
ALT_FAX_NUMBER VARCHAR2(20 BYTE),
ALT_E_MAIL VARCHAR2(80 BYTE),
ALT_SITE_NAME VARCHAR2(80 BYTE),
ALT_ADDRESS VARCHAR2(200 BYTE),
ALT_CITY VARCHAR2(30 BYTE),
ALT_STATE VARCHAR2(30 BYTE),
ALT_ZIPCODE VARCHAR2(20 BYTE),
FCS_CC_NOTIFY NUMBER,
SYMPTOM_CODE VARCHAR2(10 BYTE),
CURE_CODE VARCHAR2(10 BYTE),
SITE_TIME DATE,
ALT_PROD_SERIAL VARCHAR2(30 BYTE),
MSG_WAIT_COUNT NUMBER,
REPLY_WAIT_COUNT NUMBER,
REPLY_STATE NUMBER,
OPER_SYSTEM VARCHAR2(20 BYTE),
CASE_SUP_TYPE VARCHAR2(2 BYTE),
PAYMENT_METHOD VARCHAR2(30 BYTE),
REF_NUMBER VARCHAR2(80 BYTE),
DOA_CHECK_BOX NUMBER,
CUSTOMER_SATIS NUMBER,
CUSTOMER_CODE VARCHAR2(20 BYTE),
SERVICE_ID VARCHAR2(30 BYTE),
ALT_PHONE VARCHAR2(20 BYTE),
FORWARD_CHECK NUMBER,
CCLIST1 VARCHAR2(255 BYTE),
CCLIST2 VARCHAR2(255 BYTE),
KEYWORDS VARCHAR2(255 BYTE),
OWNERSHIP_STMP DATE,
MODIFY_STMP DATE,
DIST NUMBER,
ARCH_IND NUMBER,
IS_SUPERCASE NUMBER,
DEV NUMBER,
X_FAULTTOCSS VARCHAR2(1 BYTE),
X_FAULT_REP_ORIG VARCHAR2(1 BYTE),
X_SPECIAL_NOTE VARCHAR2(80 BYTE),
X_FAULT_LOC_CODE VARCHAR2(2 BYTE),
X_CUST_ADV_CLEAR VARCHAR2(1 BYTE),
X_OST_DURATION NUMBER,
X_CUST_ADV_TIME DATE,
X_RESTORE_TIME DATE,
X_RESP_TIME DATE,
X_REPORT_TIME DATE,
X_STAGE VARCHAR2(30 BYTE),
X_ESC_LEVEL VARCHAR2(2 BYTE),
X_WM_STATUS VARCHAR2(3 BYTE),
X_REFUND_DUE VARCHAR2(1 BYTE),
X_TOTAL_JEOPARDY VARCHAR2(2 BYTE),
X_STAGE_JEOPARDY VARCHAR2(2 BYTE),
X_FAULT_CODE VARCHAR2(2 BYTE),
X_WORKMANAGER_DATA VARCHAR2(255 BYTE),
X_CLEAR_DETAILS VARCHAR2(2 BYTE),
X_FAULT_TYPE VARCHAR2(2 BYTE),
X_CLEAR_DESC VARCHAR2(21 BYTE),
X_FAILURE_ANALYSIS VARCHAR2(3 BYTE),
X_OST_RESPONSE NUMBER,
X_OST_RESTORE NUMBER,
X_OST_CLEAR NUMBER,
X_CUST_CONTACT_PERMISSION NUMBER,
X_NON_LIVE_FLAG NUMBER,
X_CLK_RESTORE NUMBER,
X_CLK_CLEAR NUMBER,
X_CLK_RESPONSE NUMBER,
X_CASE_TOTAL_JEOPARDY NUMBER,
X_CASE_TOTAL_FAILURE NUMBER,
X_CASE_STAGE_JEOPARDY NUMBER,
X_CASE_STAGE_FAILURE NUMBER,
X_CASE_TARGET_TIME DATE,
X_STAGE_TARGET_TIME DATE,
X_CASE_JEOPARDY_TIME DATE,
X_STAGE_JEOPARDY_TIME DATE,
X_UNPARK_TIME DATE,
CASE_SOLN2WORKAROUND NUMBER,
CASE_PREVQ2QUEUE NUMBER,
CASE_CURRQ2QUEUE NUMBER,
CASE_WIP2WIPBIN NUMBER,
CASE_LOGIC2PROG_LOGIC NUMBER,
CASE_OWNER2USER NUMBER,
CASE_STATE2CONDITION NUMBER,
CASE_ORIGINATOR2USER NUMBER,
CASE_EMPL2EMPLOYEE NUMBER,
CALLTYPE2GBST_ELM NUMBER,
RESPPRTY2GBST_ELM NUMBER,
RESPSVRTY2GBST_ELM NUMBER,
CASE_PROD2SITE_PART NUMBER,
CASE_REPORTER2SITE NUMBER,
CASE_REPORTER2CONTACT NUMBER,
ENTITLEMENT2CONTRACT NUMBER,
CASESTS2GBST_ELM NUMBER,
CASE_RIP2RIPBIN NUMBER,
COVRD_PPI2SITE_PART NUMBER,
CASE_DISTR2SITE NUMBER,
CASE2ADDRESS NUMBER,
CASE_NODE2SITE_PART NUMBER,
DE_PRODUCT2SITE_PART NUMBER,
CASE_PRT2PART_INFO NUMBER,
DE_PRT2PART_INFO NUMBER,
ALT_CONTACT2CONTACT NUMBER,
TASK2OPPORTUNITY NUMBER,
CASE2LIFE_CYCLE NUMBER,
CASE_VICTIM2CASE NUMBER,
ENTITLE2CONTR_ITM NUMBER,
X_OWNED_AT2WORKGROUP NUMBER,
X_CUST_HANDLING2WORKGROUP NUMBER,
X_THIRD_CONTACT2CONTACT NUMBER,
X_CASE2OST NUMBER,
X_FLT_FRP2USER_DATA_ELM NUMBER,
X_RECEP_FRP2USER_DATA_ELM NUMBER,
X_MSF_CASE2NFM_TROUBLE_TICKET NUMBER,
X_ECO_CASE_ID VARCHAR2(22 BYTE),
X_CASE2REASONCODE NUMBER,
X_EXT_CASE_ID VARCHAR2(22 BYTE),
X_EXT_SYS_ID VARCHAR2(7 BYTE),
X_PROC_NET_UPDATE NUMBER,
X_LLU_FLAG VARCHAR2(1 BYTE)
)
and then
#!/bin/ksh
sqlplus -S xxx/xxxxxxx <<EPF
ALTER SESSION ENABLE PARALLEL DML;
insert /*+ append */ into table_xxxx_t
select
/*+ parallel(x 20) */
x.objid,x.title ,x.s_title,x.id_number,x.creation_time,x.internal_case,x.hangup_time,
x.alt_phone_num,x.phone_num,x.pickup_ext,to_lob(x.case_history) case_history,x.topics_title,
x.yank_flag,x.server_status,x.support_type,x.warranty_flag,x.support_msg,
x.alt_first_name,x.alt_last_name,x.alt_fax_number,x.alt_e_mail,x.alt_site_name,x.alt_address,x.alt_city,
x.alt_state,x.alt_zipcode,x.fcs_cc_notify,x.symptom_code,x.cure_code,x.site_time,
x.alt_prod_serial,x.msg_wait_count,x.reply_wait_count,x.reply_state,x.oper_system,x.case_sup_type,
x.payment_method,x.ref_number,x.doa_check_box,x.customer_satis,x.customer_code,x.service_id,
x.alt_phone,x.forward_check,x.cclist1,x.cclist2,x.keywords,x.ownership_stmp,x.modify_stmp,x.dist,
x.arch_ind,x.is_supercase,x.dev,x.x_faulttocss,x.x_fault_rep_orig,x.x_special_note,x.x_fault_loc_code,
x.x_cust_adv_clear,x.x_ost_duration,x.x_cust_adv_time,x.x_restore_time,x.x_resp_time,x.x_report_time,
x.x_stage,x.x_esc_level,x.x_wm_status,x.x_refund_due,x.x_total_jeopardy,x.x_stage_jeopardy,
x.x_fault_code,x.x_workmanager_data,x.x_clear_details,x.x_fault_type,x.x_clear_desc,
x.x_failure_analysis,x.x_ost_response,x.x_ost_restore,x.x_ost_clear,x.x_cust_contact_permission,
x.x_non_live_flag,x.x_clk_restore,x.x_clk_clear,x.x_clk_response,x.x_case_total_jeopardy,x.x_case_total_failure,
x.x_case_stage_jeopardy,x.x_case_stage_failure,x.x_case_target_time,x.x_stage_target_time,
x.x_case_jeopardy_time,x.x_stage_jeopardy_time,x.x_unpark_time,x.case_soln2workaround,
x.case_prevq2queue,x.case_currq2queue,x.case_wip2wipbin,x.case_logic2prog_logic,
x.case_owner2user,x.case_state2condition,x.case_originator2user,x.case_empl2employee,
x.calltype2gbst_elm,x.respprty2gbst_elm,x.respsvrty2gbst_elm,x.case_prod2site_part,
x.case_reporter2site,x.case_reporter2contact,x.entitlement2contract,x.casests2gbst_elm,
x.case_rip2ripbin,x.covrd_ppi2site_part,x.case_distr2site,x.case2address,x.case_node2site_part,
x.de_product2site_part,x.case_prt2part_info,x.de_prt2part_info,x.alt_contact2contact,x.task2opportunity,
x.case2life_cycle,x.case_victim2case,x.entitle2contr_itm,x.x_owned_at2workgroup,
x.x_cust_handling2workgroup,x.x_third_contact2contact,x.x_case2ost,x.x_flt_frp2user_data_elm,
x.x_recep_frp2user_data_elm,x.x_msf_case2nfm_trouble_ticket,x.x_eco_case_id ,
x.x_case2reasoncode,x.x_ext_case_id,x.x_ext_sys_id,x.x_proc_net_update,x.x_llu_flag
from table_xxxx x
;
commit;
alter session disable parallel dml ;
exit;
EPF

I am running the above script in the background.

3) This is the CTAS option I am using.
I have set parallel_automatic_tuning=true in the init.ora, though I am explicitly specifying the number of parallel processes:
select name, value from v$parameter
where name = 'parallel_automatic_tuning';

NAME                            VALUE
------------------------------- ------
parallel_automatic_tuning       TRUE

create table table_xxxx_t
tablespace data06
as
select /*+ parallel(t 20) */
objid,title ,s_title,id_number,creation_time,internal_case,hangup_time,
alt_phone_num,phone_num,pickup_ext,TO_LOB(case_history),topics_title,
yank_flag,server_status,support_type,warranty_flag,support_msg,
alt_first_name,alt_last_name,alt_fax_number,alt_e_mail,alt_site_name,alt_address,
alt_city,alt_state,alt_zipcode, fcs_cc_notify,symptom_code,cure_code,site_time,
alt_prod_serial,msg_wait_count,reply_wait_count,reply_state,oper_system,case_sup_type,
payment_method,ref_number,doa_check_box,customer_satis,customer_code,service_id,
alt_phone,forward_check,cclist1,cclist2,keywords, ownership_stmp,modify_stmp,dist,
arch_ind,is_supercase,dev,x_faulttocss,x_fault_rep_orig,x_special_note,x_fault_loc_code,
x_cust_adv_clear,x_ost_duration,x_cust_adv_time,x_restore_time,x_resp_time,x_report_time,
x_stage,x_esc_level,x_wm_status,x_refund_due,x_total_jeopardy,x_stage_jeopardy,
x_fault_code,x_workmanager_data,x_clear_details,x_fault_type,x_clear_desc,
x_failure_analysis,x_ost_response,x_ost_restore,x_ost_clear,x_cust_contact_permission,
x_non_live_flag,x_clk_restore,x_clk_clear,x_clk_response,x_case_total_jeopardy,x_case_total_failure,
x_case_stage_jeopardy,x_case_stage_failure,x_case_target_time,x_stage_target_time,
x_case_jeopardy_time,x_stage_jeopardy_time,x_unpark_time,case_soln2workaround,
case_prevq2queue,case_currq2queue,case_wip2wipbin,case_logic2prog_logic,
case_owner2user,case_state2condition,case_originator2user,case_empl2employee,
calltype2gbst_elm,respprty2gbst_elm,respsvrty2gbst_elm,case_prod2site_part,
case_reporter2site,case_reporter2contact,entitlement2contract,casests2gbst_elm,
case_rip2ripbin,covrd_ppi2site_part,
case_distr2site,case2address,case_node2site_part,de_product2site_part,
case_prt2part_info,de_prt2part_info,alt_contact2contact,task2opportunity,
case2life_cycle,case_victim2case,entitle2contr_itm,x_owned_at2workgroup,
x_cust_handling2workgroup,x_third_contact2contact,x_case2ost,x_flt_frp2user_data_elm,
x_recep_frp2user_data_elm,x_msf_case2nfm_trouble_ticket,x_eco_case_id ,
x_case2reasoncode,x_ext_case_id,x_ext_sys_id,x_proc_net_update,x_llu_flag
from table_xxxx t
;

Some LONGs have data around 2500 bytes, and other LONGs are real LONGs with data of more than 6000 bytes.

Other significant waits for the insert /*+ append */ parallel run I am currently testing are (though they are called idle events):
a) PX Deq Credit: send blkd - waiting for 136205.68 secs
b) PX Deq Credit: need buffer - 106.99 secs

please suggest Tom
- Many thanks
ravi

Tom Kyte
May 31, 2006 - 8:54 am UTC

you say "other significant" - what are THE significant ones?

The only thing I can think here is that you have totally insufficient IO to accomplish this in your allocated timeframe.

You might consider using dbms_redefinition to online redefine this if down time is to be avoided.

Do you have 10 CPUs (you are running parallel 20)? Do you have sufficient IO for 20 processes to all concurrently READ and WRITE without stomping on each other?

Perhaps your root cause problem here is "too much parallel".


do you have a REAL grasp of the length of the longs, information such as

"some longs have data around 2500bytes" - is very vague (some = 1 or 10000000?)
"other longs ... with data more than 6000bytes" - even more vague, (other=1 or 100000, more than 6000 implies what? 1m, 1gig???)

LONG TO CLOB

ravs, May 31, 2006 - 11:00 am UTC

Hi Tom,
Thanks for the reply.
a) As I mentioned in the posting, the significant waits I can see are the following:
1) PX Deq Credit: send blkd - waiting for 136205.68 secs
2) PX Deq Credit: need buffer - 106.99 secs

b) I can't use dbms_redefinition for online rebuilding because I also have to upgrade the product after I convert from LONG to CLOB.

c) I have 24 CPUs on my box.

d) Sorry Tom, I could not give you the real picture about the LONG lengths.
I have around 2M records, of which 0.5M are around 2500 bytes and the remaining 1.5M are above 2500 bytes and less than 1MB in any case (I looked at a sample of around 100,000 records to conclude the above).

"20 processes to all concurrently READ and WRITE without stomping on each other" - I don't know how to find that out.

can you please throw some light on this ?
- Many Thanks
ravi

Tom Kyte
May 31, 2006 - 3:20 pm UTC

a) But what are the PX sessions waiting on? Forget those waits; that is waiting for the answer.

see Note 271767.1 and Note 253912.1 on metalink.

What are the *real* waits here.

b) that makes no sense to me at all, explain why not. If you can use THIS technique above, you can CERTAINLY use an online redefine. I'd need to know "why" you cannot.

c) ok

d) these are tiny.



do you have lots of disk for these 20 concurrent processes to read and write - that is what this comes DOWN TO.
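For reference, the online redefinition Tom mentions can perform the LONG to CLOB conversion during the copy, because TO_LOB is allowed in the column mapping. The sketch below is hypothetical: the table names (t_orig, t_interim) and columns (x, y) are placeholders, not the poster's schema, and index/constraint handling is elided.

```sql
-- Hypothetical sketch of DBMS_REDEFINITION converting a LONG to a CLOB.
-- t_orig(x number, y long) is assumed; t_interim is the interim table.
create table t_interim ( x number, y clob );

begin
  -- verify the table can be redefined (by primary key, the default)
  dbms_redefinition.can_redef_table( user, 'T_ORIG' );

  -- the col_mapping applies TO_LOB while copying the data
  dbms_redefinition.start_redef_table(
      uname       => user,
      orig_table  => 'T_ORIG',
      int_table   => 'T_INTERIM',
      col_mapping => 'x x, to_lob(y) y' );

  -- ... recreate indexes, constraints, grants on T_INTERIM here ...

  dbms_redefinition.finish_redef_table( user, 'T_ORIG', 'T_INTERIM' );
end;
/
```

The table stays available for queries and DML during the copy; only the brief finish step takes a lock.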

DBA_LOBS

Sean, June 18, 2006 - 2:56 pm UTC

Tom,
I want to find the type of a LOB column, such as CLOB or BLOB.
I looked at dba_lobs, and it seems it does not have that information.
I know I could desc the table, but I would like a DBA view that shows it.

thanks.

Tom Kyte
June 18, 2006 - 4:06 pm UTC

dba_tab_columns

ops$tkyte@ORA10GR2> select data_type, count(*) from dba_tab_columns group by data_type order by 2;

DATA_TYPE                        COUNT(*)
------------------------------ ----------
KU$_INDEX_T                             1
WRI$_ADV_ABSTRACT_T                     1
KU$_FGA_REL_COL_LIST_T                  1
...
XMLTYPE                                35
ANYDATA                                50
ROWID                                  54
BLOB                                   61
TIMESTAMP(6) WITH TIME ZONE           148
KU$_SCHEMAOBJ_T                       186
TIMESTAMP(6)                          193
CLOB                                  214
LONG                                  236
CHAR                                  556
RAW                                  1447
DATE                                 1969
VARCHAR2                            22504
NUMBER                              25982

169 rows selected.

 

Long to Clob

ravs, June 19, 2006 - 1:31 am UTC

Hi Tom,
I have a table with 135 columns, one of which is a LONG.
I am trying to convert the LONG to a CLOB. I have around 2M records in the table. I have created 7 tables similar to the original (using a CLOB instead of the LONG in all 7 tables)
and am reading from the original and converting with TO_LOB. I am running all 7 conversions in different SQL*Plus sessions.
I have enabled trace for one of the conversions and here is the trace output.
Can you suggest improvements on this?
I am using Oracle 9.2.0.4 on a SunFire 15K with 60GB RAM and 36 CPUs, and I am the only person working on it.
SQL> desc table_xxx2
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OBJID                                              NUMBER
 TITLE                                              VARCHAR2(80)
 S_TITLE                                            VARCHAR2(80)
 ID_NUMBER                                          VARCHAR2(255)
 CREATION_TIME                                      DATE
 INTERNAL_CASE                                      NUMBER
 HANGUP_TIME                                        DATE
 ALT_PHONE_NUM                                      VARCHAR2(20)
 PHONE_NUM                                          VARCHAR2(20)
 PICKUP_EXT                                         VARCHAR2(8)
 CASE_HISTORY                                       CLOB
 TOPICS_TITLE                                       VARCHAR2(255)
 YANK_FLAG                                          NUMBER
 SERVER_STATUS                                      VARCHAR2(2)
 SUPPORT_TYPE                                       VARCHAR2(2)
 WARRANTY_FLAG                                      VARCHAR2(2)
 SUPPORT_MSG                                        VARCHAR2(80)
 ALT_FIRST_NAME                                     VARCHAR2(30)
 ALT_LAST_NAME                                      VARCHAR2(30)
 ALT_FAX_NUMBER                                     VARCHAR2(20)
 ALT_E_MAIL                                         VARCHAR2(80)
 ALT_SITE_NAME                                      VARCHAR2(80)
 ALT_ADDRESS                                        VARCHAR2(200)
 ALT_CITY                                           VARCHAR2(30)
 ALT_STATE                                          VARCHAR2(30)
 ALT_ZIPCODE                                        VARCHAR2(20)
 FCS_CC_NOTIFY                                      NUMBER
 SYMPTOM_CODE                                       VARCHAR2(10)
 CURE_CODE                                          VARCHAR2(10)
 SITE_TIME                                          DATE
 ALT_PROD_SERIAL                                    VARCHAR2(30)
 MSG_WAIT_COUNT                                     NUMBER
 REPLY_WAIT_COUNT                                   NUMBER
 REPLY_STATE                                        NUMBER
 OPER_SYSTEM                                        VARCHAR2(20)
 CASE_SUP_TYPE                                      VARCHAR2(2)
 PAYMENT_METHOD                                     VARCHAR2(30)
 REF_NUMBER                                         VARCHAR2(80)
 DOA_CHECK_BOX                                      NUMBER
 CUSTOMER_SATIS                                     NUMBER
 CUSTOMER_CODE                                      VARCHAR2(20)
 SERVICE_ID                                         VARCHAR2(30)
 ALT_PHONE                                          VARCHAR2(20)
 FORWARD_CHECK                                      NUMBER
 CCLIST1                                            VARCHAR2(255)
 CCLIST2                                            VARCHAR2(255)
 KEYWORDS                                           VARCHAR2(255)
 OWNERSHIP_STMP                                     DATE
 MODIFY_STMP                                        DATE
 DIST                                               NUMBER
 ARCH_IND                                           NUMBER
 IS_SUPERCASE                                       NUMBER
 DEV                                                NUMBER
 X_FAULTTOCSS                                       VARCHAR2(1)
 X_FAULT_REP_ORIG                                   VARCHAR2(1)
 X_SPECIAL_NOTE                                     VARCHAR2(80)
 X_FAULT_LOC_CODE                                   VARCHAR2(2)
 X_CUST_ADV_CLEAR                                   VARCHAR2(1)
 X_OST_DURATION                                     NUMBER
 X_CUST_ADV_TIME                                    DATE
 X_RESTORE_TIME                                     DATE
 X_RESP_TIME                                        DATE
 X_REPORT_TIME                                      DATE
 X_STAGE                                            VARCHAR2(30)
 X_ESC_LEVEL                                        VARCHAR2(2)
 X_WM_STATUS                                        VARCHAR2(3)
 X_REFUND_DUE                                       VARCHAR2(1)
 X_TOTAL_JEOPARDY                                   VARCHAR2(2)
 X_STAGE_JEOPARDY                                   VARCHAR2(2)
 X_FAULT_CODE                                       VARCHAR2(2)
 X_WORKMANAGER_DATA                                 VARCHAR2(255)
 X_CLEAR_DETAILS                                    VARCHAR2(2)
 X_FAULT_TYPE                                       VARCHAR2(2)
 X_CLEAR_DESC                                       VARCHAR2(21)
 X_FAILURE_ANALYSIS                                 VARCHAR2(3)
 X_OST_RESPONSE                                     NUMBER
 X_OST_RESTORE                                      NUMBER
 X_OST_CLEAR                                        NUMBER
 X_CUST_CONTACT_PERMISSION                          NUMBER
 X_NON_LIVE_FLAG                                    NUMBER
 X_CLK_RESTORE                                      NUMBER
 X_CLK_CLEAR                                        NUMBER
 X_CLK_RESPONSE                                     NUMBER
 X_CASE_TOTAL_JEOPARDY                              NUMBER
 X_CASE_TOTAL_FAILURE                               NUMBER
 X_CASE_STAGE_JEOPARDY                              NUMBER
 X_CASE_STAGE_FAILURE                               NUMBER
 X_CASE_TARGET_TIME                                 DATE
 X_STAGE_TARGET_TIME                                DATE
 X_CASE_JEOPARDY_TIME                               DATE
 X_STAGE_JEOPARDY_TIME                              DATE
 X_UNPARK_TIME                                      DATE
 CASE_SOLN2WORKAROUND                               NUMBER
 CASE_PREVQ2QUEUE                                   NUMBER
 CASE_CURRQ2QUEUE                                   NUMBER
 CASE_WIP2WIPBIN                                    NUMBER
 CASE_LOGIC2PROG_LOGIC                              NUMBER
 CASE_OWNER2USER                                    NUMBER
 CASE_STATE2CONDITION                               NUMBER
 CASE_ORIGINATOR2USER                               NUMBER
 CASE_EMPL2EMPLOYEE                                 NUMBER
 CALLTYPE2GBST_ELM                                  NUMBER
 RESPPRTY2GBST_ELM                                  NUMBER
 RESPSVRTY2GBST_ELM                                 NUMBER
 CASE_PROD2SITE_PART                                NUMBER
 CASE_REPORTER2SITE                                 NUMBER
 CASE_REPORTER2CONTACT                              NUMBER
 ENTITLEMENT2CONTRACT                               NUMBER
 CASESTS2GBST_ELM                                   NUMBER
 CASE_RIP2RIPBIN                                    NUMBER
 COVRD_PPI2SITE_PART                                NUMBER
 CASE_DISTR2SITE                                    NUMBER
 CASE2ADDRESS                                       NUMBER
 CASE_NODE2SITE_PART                                NUMBER
 DE_PRODUCT2SITE_PART                               NUMBER
 CASE_PRT2PART_INFO                                 NUMBER
 DE_PRT2PART_INFO                                   NUMBER
 ALT_CONTACT2CONTACT                                NUMBER
 TASK2OPPORTUNITY                                   NUMBER
 CASE2LIFE_CYCLE                                    NUMBER
 CASE_VICTIM2CASE                                   NUMBER
 ENTITLE2CONTR_ITM                                  NUMBER
 X_OWNED_AT2WORKGROUP                               NUMBER
 X_CUST_HANDLING2WORKGROUP                          NUMBER
 X_THIRD_CONTACT2CONTACT                            NUMBER
 X_CASE2OST                                         NUMBER
 X_FLT_FRP2USER_DATA_ELM                            NUMBER
 X_RECEP_FRP2USER_DATA_ELM                          NUMBER
 X_MSF_CASE2NFM_TROUBLE_TICKET                      NUMBER
 X_ECO_CASE_ID                                      VARCHAR2(22)
 X_CASE2REASONCODE                                  NUMBER
 X_EXT_CASE_ID                                      VARCHAR2(22)
 X_EXT_SYS_ID                                       VARCHAR2(7)
 X_PROC_NET_UPDATE                                  NUMBER
 X_LLU_FLAG                                         VARCHAR2(1)

INSERT /*+ append */ INTO table_XXX2
SELECT
  objid
  ,title
  ,s_title
  ,id_number
  ,creation_time
  ,internal_case
  ,hangup_time
  ,alt_phone_num
  ,phone_num
  ,pickup_ext
  ,to_lob(case_history) case_history
  ,topics_title
  ,yank_flag
  ,server_status
  ,support_type
  ,warranty_flag
  ,support_msg
  ,alt_first_name
  ,alt_last_name
  ,alt_fax_number
  ,alt_e_mail
  ,alt_site_name
  ,alt_address
  ,alt_city
  ,alt_state
  ,alt_zipcode
  ,fcs_cc_notify
  ,symptom_code
  ,cure_code
  ,site_time
  ,alt_prod_serial
  ,msg_wait_count
  ,reply_wait_count
  ,reply_state
  ,oper_system
  ,case_sup_type
  ,payment_method
  ,ref_number
  ,doa_check_box
  ,customer_satis
  ,customer_code
  ,service_id
  ,alt_phone
  ,forward_check
  ,cclist1
  ,cclist2
  ,keywords
  ,ownership_stmp
  ,modify_stmp
  ,dist
  ,arch_ind
  ,is_supercase
  ,dev
  ,x_faulttocss
  ,x_fault_rep_orig
  ,x_special_note
  ,x_fault_loc_code
  ,x_cust_adv_clear
  ,x_ost_duration
  ,x_cust_adv_time
  ,x_restore_time
  ,x_resp_time
  ,x_report_time
  ,x_stage
  ,x_esc_level
  ,x_wm_status
  ,x_refund_due
  ,x_total_jeopardy
  ,x_stage_jeopardy
  ,x_fault_code
  ,x_workmanager_data
  ,x_clear_details
  ,x_fault_type
  ,x_clear_desc
  ,x_failure_analysis
  ,x_ost_response
  ,x_ost_restore
  ,x_ost_clear
  ,x_cust_contact_permission
  ,x_non_live_flag
  ,x_clk_restore
  ,x_clk_clear
  ,x_clk_response
  ,x_case_total_jeopardy
  ,x_case_total_failure
  ,x_case_stage_jeopardy
  ,x_case_stage_failure
  ,x_case_target_time
  ,x_stage_target_time
  ,x_case_jeopardy_time
  ,x_stage_jeopardy_time
  ,x_unpark_time
  ,case_soln2workaround
  ,case_prevq2queue
  ,case_currq2queue
  ,case_wip2wipbin
  ,case_logic2prog_logic
  ,case_owner2user
  ,case_state2condition
  ,case_originator2user
  ,case_empl2employee
  ,calltype2gbst_elm
  ,respprty2gbst_elm
  ,respsvrty2gbst_elm
  ,case_prod2site_part
  ,case_reporter2site
  ,case_reporter2contact
  ,entitlement2contract
  ,casests2gbst_elm
  ,case_rip2ripbin
  ,covrd_ppi2site_part
  ,case_distr2site
  ,case2address
  ,case_node2site_part
  ,de_product2site_part
  ,case_prt2part_info
  ,de_prt2part_info
  ,alt_contact2contact
  ,task2opportunity
  ,case2life_cycle
  ,case_victim2case
  ,entitle2contr_itm
  ,x_owned_at2workgroup
  ,x_cust_handling2workgroup
  ,x_third_contact2contact
  ,x_case2ost
  ,x_flt_frp2user_data_elm
  ,x_recep_frp2user_data_elm
  ,x_msf_case2nfm_trouble_ticket
  ,x_eco_case_id
  ,x_case2reasoncode
  ,x_ext_case_id
  ,x_ext_sys_id
  ,x_proc_net_update
  ,x_llu_flag
from table_ORIG
where rowid in (select id from table_XXXX_obj where objid between 271035866 and 281035866)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      1 145373.06  142671.84    1686304    2542654    1702034      295150
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        1 145373.06  142671.84    1686304    2542654    1702034      295150

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 24  
********************************************************************************

There is a column objid on the original table, and I am using it to divide the table into 7 pieces, each containing around 300,000 records.

My intention here is: I will first convert the LONGs to CLOBs using the above statement, then drop the original table, create a range-partitioned table on objid, and use ALTER TABLE ... EXCHANGE PARTITION.

Please look into the tkprof output and suggest improvements. This single insert statement took around 40 hours to convert from LONG to CLOB.

select id from table_XXXX_obj where objid between 271035866 and 281035866 -- this statement returns 295150
records.
I tried all the other options you suggested, like CTAS with parallel, ALTER TABLE MODIFY, and a one-shot INSERT with parallel, but they never completed (I waited almost 3 days and killed them).
I have a max LONG size of 56K and a chained-row count of 418058.
Please help me.
- Thanks in advance 

Tom Kyte
June 19, 2006 - 5:50 pm UTC

did you use a 8.x or before tkprof on a 9i or above trace file here?

I cannot believe you would have 40plus hours of patience for so few rows.


where is the plan and what statistics do you see on the plan itself?

Long to Clob

ravs, June 19, 2006 - 11:56 pm UTC

Here is the complete tracefile output Tom.

TKPROF: Release 9.2.0.4.0 - Production on Mon Jun 19 08:38:27 2006

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Trace file: xxxxx_ora_7891.trc
Sort options: default

********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************

INSERT /*+ append */ INTO table_xxx2
SELECT
objid
,title
,s_title
,id_number
,creation_time
,internal_case
,hangup_time
,alt_phone_num
,phone_num
,pickup_ext
,to_lob(case_history) case_history
,topics_title
,yank_flag
,server_status
,support_type
,warranty_flag
,support_msg
,alt_first_name
,alt_last_name
,alt_fax_number
,alt_e_mail
,alt_site_name
,alt_address
,alt_city
,alt_state
,alt_zipcode
,fcs_cc_notify
,symptom_code
,cure_code
,site_time
,alt_prod_serial
,msg_wait_count
,reply_wait_count
,reply_state
,oper_system
,case_sup_type
,payment_method
,ref_number
,doa_check_box
,customer_satis
,customer_code
,service_id
,alt_phone
,forward_check
,cclist1
,cclist2
,keywords
,ownership_stmp
,modify_stmp
,dist
,arch_ind
,is_supercase
,dev
,x_faulttocss
,x_fault_rep_orig
,x_special_note
,x_fault_loc_code
,x_cust_adv_clear
,x_ost_duration
,x_cust_adv_time
,x_restore_time
,x_resp_time
,x_report_time
,x_stage
,x_esc_level
,x_wm_status
,x_refund_due
,x_total_jeopardy
,x_stage_jeopardy
,x_fault_code
,x_workmanager_data
,x_clear_details
,x_fault_type
,x_clear_desc
,x_failure_analysis
,x_ost_response
,x_ost_restore
,x_ost_clear
,x_cust_contact_permission
,x_non_live_flag
,x_clk_restore
,x_clk_clear
,x_clk_response
,x_case_total_jeopardy
,x_case_total_failure
,x_case_stage_jeopardy
,x_case_stage_failure
,x_case_target_time
,x_stage_target_time
,x_case_jeopardy_time
,x_stage_jeopardy_time
,x_unpark_time
,case_soln2workaround
,case_prevq2queue
,case_currq2queue
,case_wip2wipbin
,case_logic2prog_logic
,case_owner2user
,case_state2condition
,case_originator2user
,case_empl2employee
,calltype2gbst_elm
,respprty2gbst_elm
,respsvrty2gbst_elm
,case_prod2site_part
,case_reporter2site
,case_reporter2contact
,entitlement2contract
,casests2gbst_elm
,case_rip2ripbin
,covrd_ppi2site_part
,case_distr2site
,case2address
,case_node2site_part
,de_product2site_part
,case_prt2part_info
,de_prt2part_info
,alt_contact2contact
,task2opportunity
,case2life_cycle
,case_victim2case
,entitle2contr_itm
,x_owned_at2workgroup
,x_cust_handling2workgroup
,x_third_contact2contact
,x_case2ost
,x_flt_frp2user_data_elm
,x_recep_frp2user_data_elm
,x_msf_case2nfm_trouble_ticket
,x_eco_case_id
,x_case2reasoncode
,x_ext_case_id
,x_ext_sys_id
,x_proc_net_update
,x_llu_flag
from table_ORIG
where rowid in (select id from table_xxx_obj where objid between 271035866 and 281035866)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      1 145373.06  142671.84    1686304    2542654    1702034      295150
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        1 145373.06  142671.84    1686304    2542654    1702034      295150

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 24 (xxx)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  INSERT STATEMENT   GOAL: CHOOSE
      0   LOAD AS SELECT
      0    HASH JOIN
      0     SORT (UNIQUE)
      0      TABLE ACCESS (FULL) OF 'TABLE_xxx_OBJ'
      0     TABLE ACCESS   GOAL: ANALYZED (FULL) OF 'TABLE_ORIG'

********************************************************************************

commit


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          2           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          0          2           0

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 24 (xxx)



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      2 145373.06  142671.84    1686304    2542654    1702036      295150
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3 145373.06  142671.84    1686304    2542654    1702036      295150

Misses in library cache during parse: 1
Misses in library cache during execute: 1


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       12      0.01       0.01          0          0          0           0
Execute     12      0.02       0.01          0         16         18           9
Fetch       23      0.00       0.00          0         44          0          21
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       47      0.03       0.03          0         60         18          30

Misses in library cache during parse: 5

2 user SQL statements in session.
12 internal SQL statements in session.
14 SQL statements in session.
1 statement EXPLAINed in this session.
********************************************************************************
Trace file: space1_ora_7891.trc
Trace file compatibility: 9.00.01
Sort options: default

1 session in tracefile.
2 user SQL statements in trace file.
12 internal SQL statements in trace file.
14 SQL statements in trace file.
7 unique SQL statements in trace file.
1 SQL statements EXPLAINed using schema:
xxx.prof$plan_table
Default table was used.
Table was created.
Table was dropped.
296 lines in trace file.

Please note that there are no indexes on either table; only statistics are available.
I first upgraded the database from 8i to 9.2.0.4 and am now trying to convert the LONG to a CLOB.
- Thanks in advance


Tom Kyte
June 20, 2006 - 9:43 am UTC

sigh, you didn't answer my question at all.


and the trace is virtually useless. since the cursors were not closed, the rows columns never got filled in. and statistics_level wasn't set...

I was hoping for a trace that would show this (note: we don't need a really long insert two times on the same page...)


insert /*+ append */ into t select * from scott.emp


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.07          0          4          7          14
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.07          0          4          7          14

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 66

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  LOAD AS SELECT (cr=4 r=0 w=1 time=60205 us)
     14   TABLE ACCESS FULL EMP (cr=3 r=0 w=0 time=262 us)


Long to Clob

ravs, June 20, 2006 - 11:29 pm UTC

Hi Tom,
Sorry for not giving enough info.
I am using the 9i tkprof, not the 8.x tkprof.
This is the content of the trace file.

Just to remind you: I have upgraded this from 8.x to 9.2.0.4 and am trying to convert the LONG to a CLOB.

INSERT /*+ append */ INTO table_xxx2
SELECT objid,title,s_title,id_number,creation_time ,internal_case
,hangup_time,alt_phone_num,phone_num,pickup_ext,to_lob(case_history) case_history ,topics_title ,yank_flag,x_ext_case_id,
..... (these dots are rest of columns in the table)
......
,x_ext_sys_id
,x_proc_net_update
,x_llu_flag
from table_orig
where rowid in (select id from table_xxx_obj where objid between 271035866 and 281035866) (literals are used to get the data exactly between the objids)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      1 145373.06  142671.84    1686304    2542654    1702034      295150
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        1 145373.06  142671.84    1686304    2542654    1702034      295150

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 24 (xxx)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  INSERT STATEMENT   GOAL: CHOOSE
      0   LOAD AS SELECT
      0    HASH JOIN
      0     SORT (UNIQUE)
      0      TABLE ACCESS (FULL) OF 'TABLE_xxx_OBJ'
      0     TABLE ACCESS   GOAL: ANALYZED (FULL) OF 'TABLE_orig'

- Thanks in advance


Tom Kyte
June 21, 2006 - 4:33 pm UTC

do you see how you do not have

a) rows....
b) (cr= pr= pw=....)



do this

a) login to sqlplus
b) make sure statistics_level = all
c) run your insert
d) LOG OUT <<<--- closes the cursor, gets the rows out there, the (cr= ...) out there.
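Put together, the steps above can be sketched as a single SQL*Plus session. This is only an illustration; the insert shown is the simple two-column example from the top of the thread, not the poster's 135-column statement, and the trace file name will vary.

```sql
-- Sketch: trace an insert so tkprof shows row source statistics.
-- Run in a FRESH sqlplus session and exit at the end, so the cursor
-- closes and the rows / (cr= pr= pw= time=) figures reach the trace.
alter session set statistics_level = all;
alter session set sql_trace = true;

insert /*+ append */ into t2 select x, to_lob(y) from t1;
commit;

exit
```

After logging out, run tkprof (the 9i version, matching the database) on the generated trace file in user_dump_dest.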

Ignore

ravs, June 20, 2006 - 11:37 pm UTC

Tom,
Please ignore the posting immediately above.
It seems something is wrong in the plan: I don't see any value in the Rows column. (Something is really wrong... what am I doing?)

Extremely sorry Tom
- Regards
ravi

long to clob metalink note

ravs, June 22, 2006 - 1:00 am UTC

Tom,
I was going through this Metalink note for converting from LONG to CLOB. It uses OCI8.
Note 69626.1.

Do you think this will be faster than ALTER, CTAS, or INSERT /*+ append */? I am planning to try it, but first I would like your valuable opinion.
- Thanks
ravi

Tom Kyte
June 22, 2006 - 12:14 pm UTC

ctas, insert /append/, alter - should all be better/faster/cheaper/easier than writing code that takes data OUT of the database and puts data back INTO the database.

For Ravs of India

Mathew Butler, June 22, 2006 - 9:10 am UTC

I'm interested in the resolution of this one:

The tkprof you provided looks peculiar in that a lot of time is being spent on CPU during the execute. The cause could be a number of things. It should be possible to work out the cause if you do as Tom requests:


a) login to sqlplus
b) make sure the parameter statistics_level = all is set
c) run your insert with tracing on
d) LOG OUT <<<--- closes the cursor and writes the additional row source information to the trace that will come out in the tkprof report.


If you do the above and post the results it may be possible to highlight the cause of your problem.


sorry i could not post it

ravs, August 24, 2006 - 12:55 pm UTC

Hi,
I tried collecting the statistics, but the cursor never closes, as the insert statement takes a very long time to execute - maybe more than 52 hours.
I can only get access to the system on weekends, so it may be difficult to post the results.
Sorry for my late reply.
If you have done this long to clob conversion on a 40GB table where each long is about 3MB, that would be very helpful.
Thanks in advance

-Thanks
ravs

Tom Kyte
August 27, 2006 - 8:02 pm UTC

40gb should not take 52 hours.

indexes?
is checkpoint not complete your big wait?
are you appending?
can you use nologging?
what are your big "waits"?
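
One way to see the big waits for the session doing the insert (a sketch; :sid is the SID of that session, which you would look up in v$session first):

select event, total_waits, time_waited
  from v$session_event
 where sid = :sid
 order by time_waited desc;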

Long to Clob conversion

ravs, August 30, 2006 - 6:14 pm UTC

Hi Tom,
First of all, I should thank you for all the efforts you are making for the Oracle community. I did the long to clob conversion, but not with CTAS or insert /*+ append */. It may be crude, but this is exactly what I did: I inserted all the columns except the long into a new table (the new table has the same structure as the old table, except the long is a clob), then I used your longsubstr approach to read the long piecewise and write it to the clob column using dbms_lob.write. It completed in 4 hours.
i will post the code soon.
once again thanks tom for all your suggestions and help during this tough time. Now my application is in production without issues.
Tom, i will go back look into CTAS and insert append options also as this is not end of story.

Thanks
ravs

converting VARCHAR to BLOB

A reader, May 01, 2007 - 11:55 am UTC

Dear Tom,

I'd like to know if there is any possibility to convert varchar2(4000) to BLOB.
I know it works from varchar2(4000) to CLOB, but why doesn't it work with BLOBs?

Many thanks in advance!
Tom Kyte
May 01, 2007 - 12:03 pm UTC

sure you can, but to convert a varchar2 to a blob means "what"

do you just want to take the existing data and have it be considered RAW?

utl_raw.cast_to_raw( varchar2_column )

would do that. Else, if the varchar2 contains HEX, then the normal implicit conversion from RAW->VARCHAR2->RAW->VARCHAR2 can take place. A raw, stored in varchar2 would be in HEX, a varchar2 converted to raw would take that hex code and use that to form the bytes.
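
A minimal sketch of that (blob_col and varchar_col are hypothetical column names; this relies on the RAW returned by cast_to_raw being implicitly converted on assignment to the BLOB column, which may require a recent release - on older ones you may need PL/SQL and dbms_lob instead):

update t
   set blob_col = utl_raw.cast_to_raw( varchar_col );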

NUMBER DATATYPE

Deepak, May 03, 2007 - 2:21 am UTC

Hi Tom,

Have a very simple question.

I created a table like the following.

create table for_tom(
numbercol number);

Then I inserted a value like this.

insert into for_tom values (12345.6789);

I selected from the table using

select * from for_tom;

12345.6789


Just want to know: when we define only NUMBER, what are the default values for precision and scale (p,s)?

Looking forward to your help.




Tom Kyte
May 03, 2007 - 5:36 pm UTC

the default is 38 digits of precision and any scale (within the range of the NUMBER type) you want.
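
You can see this in the data dictionary - an unconstrained NUMBER has no fixed precision or scale recorded (using the for_tom table above):

select column_name, data_precision, data_scale
  from user_tab_columns
 where table_name = 'FOR_TOM';

-- data_precision and data_scale come back NULL for a plain NUMBER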

VARCHAR2 to NCLOB/CLOB

Kraig, July 31, 2007 - 9:57 am UTC

hi Tom,

10gR2

Say I have EMPLOYEES table with column SKILLS as varchar2(4000),
I need to convert SKILLS to NCLOB, is this how you'd do it ?

alter table EMPLOYEES add skills_clob nclob ;
update EMPLOYEES set skills_clob = skills ;
commit;
alter table EMPLOYEES drop column skills;
alter table EMPLOYEES rename column skills_clob to skills;

thanks

Tom Kyte
August 02, 2007 - 9:52 am UTC

if i was permitted downtime, that might be the approach - sure

LONG columns

Bob, September 19, 2007 - 9:33 am UTC

Hi Tom,

I have a long column in a table (with a length of 31926 characters). I need to concatenate it to a VARCHAR2 column.

e.g.
SPOOL rpt.lis
select a ||','|| -- a is VARCHAR2
b ||','|| -- b is VARCHAR2
x ||','|| -- x is VARCHAR2
y -- y is LONG
from z;
SPOOL OFF

How do I do this? Thanks!


Tom Kyte
September 19, 2007 - 1:09 pm UTC

you cannot.

the maximum result of a string concatenation is 4,000 bytes.

ops$tkyte%ORA10GR2> select rpad( '*', 4000, '*' ) || rpad( '+', 4000,'+' ) x from dual;
select rpad( '*', 4000, '*' ) || rpad( '+', 4000,'+' ) x from dual
                                                              *
ERROR at line 1:
ORA-01489: result of string concatenation is too long


ops$tkyte%ORA10GR2> !oerr ora 1489
01489, 00000, "result of string concatenation is too long"
// *Cause: String concatenation result is more than the maximum size.
// *Action: Make sure that the result is less than the maximum size.



you will need to programmatically process this data - using utl_file perhaps (as long as the long contains some newlines or the total linesize is less than 32k) or some other 3gl.
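
A sketch of that piecewise approach, combining dbms_sql.define_column_long with utl_file (assumes a directory object FOO exists and that z has a numeric id column - both assumptions; each line written is still subject to utl_file's 32k line limit):

declare
    l_cursor integer := dbms_sql.open_cursor;
    l_piece  varchar2(32760);
    l_len    number;
    l_pos    number := 0;
    l_n      number;
    l_file   utl_file.file_type := utl_file.fopen( 'FOO', 'rpt.lis', 'w', 32767 );
begin
    dbms_sql.parse( l_cursor, 'select y from z where id = :id', dbms_sql.native );
    dbms_sql.bind_variable( l_cursor, ':id', 1 );
    dbms_sql.define_column_long( l_cursor, 1 );
    l_n := dbms_sql.execute_and_fetch( l_cursor );
    loop
        dbms_sql.column_value_long( l_cursor, 1, 32760, l_pos, l_piece, l_len );
        exit when l_len = 0;
        utl_file.put( l_file, l_piece );   -- piecewise write of the LONG
        l_pos := l_pos + l_len;
    end loop;
    utl_file.new_line( l_file );
    utl_file.fclose( l_file );
    dbms_sql.close_cursor( l_cursor );
end;
/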

Concatenation of VARCHAR2 and LONG

Bob, September 19, 2007 - 3:54 pm UTC

Hi Tom,

Thanks for your quick reply. Unfortunately I can't use UTL_FILE (although we can use it in development): I can't change the initialization parameter in production (because it isn't dynamic), and we have a 24/7 system that can't be taken down (according to our DBA Team Leader). We don't have Pro*C installed, and it will blow the buffer in PL/SQL. Could I do this by writing a Java program or within a UNIX shell script? Thanks so much!
Tom Kyte
September 20, 2007 - 9:32 pm UTC

you do not need to change anything for utl_file to work.

utl_file_dir is deprecated

just create a directory

create directory foo as '/tmp/whatever'
/


and use 'FOO' as the directory to utl_file, it'll work fine without a restart.


Longs and UTL_FILE

Bob, September 20, 2007 - 8:02 pm UTC

OK - I realized that on Oracle 9.2 you can use UTL_FILE with a directory object, so you don't need to bounce the database (as you had to in previous versions). We are still limited to 32767 characters in UTL_FILE.PUT_LINE... BUT we can get around it with DBMS_SQL.DEFINE_COLUMN_LONG and write out a LONG in batches of 32767 characters. Which brings me to my next question -

If I build up a string > 32767 characters in PL/SQL

e.g. using PL/SQL variables and concatenating them - how can I update it into a LONG column? Surely it would error out?
Tom Kyte
September 24, 2007 - 7:54 am UTC

only if the string you write out contains a newline. utl_file will not permit a continuous string of characters without a newline, it is just what it is.

Approach for Coverting Very big table from LONG to BLOB column

Ramu, March 12, 2008 - 7:11 am UTC

Tom,

We have a requirement to convert a LONG column into a BLOB. Our table is very big, around 280 GB. Our database is Oracle 9.2.0.8.0 on AIX.
Searching here I found there are two methods:
create table as select using to_lob, then alter/rename the tables
or
alter table x modify

Please give us your suggestion on which is the best practice.

Also I would like to know: when we create the new table, does it occupy the same amount of space with respect to the LONG column?

And when we use alter table modify, what exactly does Oracle do internally, and what space requirements do we have to consider?

Please help us.


Thanks
Ram.

Tom Kyte
March 12, 2008 - 5:34 pm UTC

... LONG column into BLOB ...

why into a blob? one would think CLOB

are you sure you have that right

Ramu, March 12, 2008 - 10:41 pm UTC

The basic idea of the conversion is to implement table partitioning on this table.

In LONG column currently we are storing documents of different formats.

Please give us your suggestion how to implement it.

Tom Kyte
March 13, 2008 - 8:33 am UTC

so, why are you using a blob? That was my question - please address that, why convert a long to a blob

Ramu, March 17, 2008 - 6:09 am UTC

Converting into BLOB is our architecture team's decision, because a BLOB takes less space compared to a CLOB and the data contains images as well.

I tried an experiment with 100,000 records in my test environment using both methods. The process did not take much time, but I observed that the tablespace grew very large, and there is a big difference between the space reported in dba_segments for this segment and the space occupied in the tablespace (derived from dba_free_space). I created a new tablespace and it has only this one table.
After observing both, the two methods look OK to me for this sample experiment, but I don't know about the whole process, because I cannot test the whole setup in my test environment. I need your suggestion on this.
Also, I would appreciate it if you could give me some strong points on why CLOB is better than BLOB, so I can convince my team to use CLOB.
One more question: why did this take more space in the tablespace? Is there any way we can reduce/compress it when it is actually stored?


Tom Kyte
March 24, 2008 - 8:48 am UTC

... Converting into BLOB is our architecture team decision because blob store
less space compared to CLOB and data contains images also
...

and a blob corrupts text data - so now what?

Meaning, you are missing character set conversion, meaning, using a blob to store text is as wrong as using a clob to store binary data.

pctversion

ali, September 01, 2009 - 3:05 pm UTC

Dear Tom

i am reading your book: Expert Oracle Database Architecture

i need some clarification about lob segments


Q 1: you said that lob segments do not generate undo - rather, a new chunk is allocated while doing DML against them, and the old chunk remains available for read consistency/rollback purposes

after committing, is the old chunk still kept available in case it is needed for recovery?

Q 2: could you please explain the pctversion clause?
i understand it is for read consistency... but what does the value of pctversion indicate?


i keep learning a lot from your site.
Thanks Tom for your valuable service.

Thanks
Ali





Tom Kyte
September 01, 2009 - 5:59 pm UTC

q1) not recovery, that is the job of redo - redo is about recovery.

after committing, it is only needed for read consistency issues - to avoid ora-1555 type errors. and for flashback query.

q2) pctversion is exactly "what percentage of the existing allocated lob space should be used to store before images"

if you use pctversion 10%
and if you have Ngb of space allocated for the lob

then 0.1 * Ngb of space in the lob segment will be used for before image data - we'll try to keep that much data around for old versions.
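
The clause is set per LOB column, for example (a sketch):

create table t ( x int, y clob )
    lob (y) store as ( pctversion 10 );

-- or change it later:
alter table t modify lob (y) ( pctversion 20 );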

As for "BLOBs corrupt text data"

A reader, September 05, 2009 - 11:54 pm UTC

Could you please provide an example showing those issues ...?
Tom Kyte
September 06, 2009 - 9:53 am UTC

don't need one.

my character set is utf8

I put a text document into the database, it is stored in a blob, it is stored "as is"

your character set is us7ascii
you retrieve my text document. Do you have a document you can actually read?


for the reason you use varchar2 and not raw to store strings

you use clob and not blob to store big strings.

lob conversion

A reader, May 17, 2010 - 8:09 pm UTC

Tom:

How do you fix this to do an nclob to blob conversion in 9i?

SQL> create table test(id number, description nclob, description1 blob);

SQL>  insert into test(id,description) values (1,'Hi');

SQL> commit;

SQL9i>  create or replace procedure nclob_blob_proc is
  2   v_nclob nClob;
  3   v_blob Blob;
  4   v_in Pls_Integer := 1;
  5   v_out Pls_Integer := 1;
  6   v_lang Pls_Integer := 0;
  7   v_warning Pls_Integer := 0;
  8   v_id number(10);
  9   begin
 10   for num in ( select id,description from test)
 11   loop
 12   v_id:=num.id ;
 13   if num.description is null then
 14     v_blob:=null;
 15   else v_nclob:=num.description;
 16     v_in:=1;
 17     v_out:=1;
 18    dbms_lob.createtemporary(v_blob,TRUE);
 19   DBMS_LOB.convertToBlob(v_blob,v_nclob,DBMS_lob.getlength(v_nclob),
 20   v_in,v_out,DBMS_LOB.default_csid,v_lang,v_warning);
 21   end if;
 22   update test set description1=v_blob where id=v_id;
 23   end loop;
 24   commit;
 25   end;
 26  /

Warning: Procedure created with compilation errors.

SQL9i> show errors
Errors for PROCEDURE NCLOB_BLOB_PROC:

LINE/COL ERROR
-------- -----------------------------------------------------------------

19/2     PL/SQL: Statement ignored
19/11    PLS-00302: component 'CONVERTTOBLOB' must be declared


      

Question about rebuilding indexes after long to clob conversion

Nick, February 17, 2011 - 6:33 am UTC

Version 9.2.0.7

In your reply of June 11, 2002 on this page you state that you have to rebuild the indexes on a table after converting a long column to a clob. Does this include system generated indexes on other clob fields in the table?
Tom Kyte
February 17, 2011 - 11:24 am UTC

ops$tkyte%ORA11GR2> create table t ( x int, y date, z varchar2(20), a long, b clob );

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into t values ( 1, sysdate, 'hello world', 'aaaaaaaaaaa', 'bbbbbb' );

1 row created.

ops$tkyte%ORA11GR2> create index t_idx1 on t(x,y,z);

Index created.

ops$tkyte%ORA11GR2> alter table t modify (a clob);

Table altered.

ops$tkyte%ORA11GR2> select index_name, status from user_indexes where table_name = 'T';

INDEX_NAME                     STATUS
------------------------------ --------
SYS_IL0000081949C00005$$       VALID
T_IDX1                         UNUSABLE
SYS_IL0000081949C00004$$       VALID

ops$tkyte%ORA11GR2> alter index t_idx1 rebuild;

Index altered.

ops$tkyte%ORA11GR2> select index_name, status from user_indexes where table_name = 'T';

INDEX_NAME                     STATUS
------------------------------ --------
SYS_IL0000081949C00005$$       VALID
T_IDX1                         VALID
SYS_IL0000081949C00004$$       VALID



just on the table - only indexes that point to the TABLE need be rebuilt. The lob indexes point to the LOBs and they are not moving.
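
So after the conversion you only need to find the indexes the alter marked unusable (a sketch, using the table T from above):

select index_name
  from user_indexes
 where table_name = 'T'
   and status = 'UNUSABLE';

-- then rebuild each one:
-- alter index t_idx1 rebuild;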

Convert LONG to CLOB column .No 32K limit

Chandra Sing, April 19, 2011 - 12:54 pm UTC

I have modified Tom's LONG_LENGTH function slightly to select/search a long column, even from a remote table.

create or replace function long_length_modified(
    p_query       in varchar2,
    p_owner       in varchar2,
    p_owner_value in varchar2,
    p_name        in varchar2,
    p_value       in varchar2 )
return clob
as
    l_cursor       integer default dbms_sql.open_cursor;
    l_n            number;
    l_long_val     varchar2(32760);  -- column_value_long returns each piece as varchar2
    l_long_piece   clob;
    l_long_len     number;
    l_buflen       number := 32760;
    l_curpos       number := 0;
    l_return_value number;
begin
    dbms_sql.parse( l_cursor, p_query, dbms_sql.native );

    dbms_sql.bind_variable( l_cursor, p_owner, p_owner_value );
    dbms_sql.bind_variable( l_cursor, p_name, p_value );

    dbms_sql.define_column_long( l_cursor, 1 );
    l_n := dbms_sql.execute( l_cursor );

    if ( dbms_sql.fetch_rows( l_cursor ) > 0 )
    then
        loop
            dbms_sql.column_value_long( l_cursor, 1, l_buflen, l_curpos,
                                        l_long_val, l_long_len );
            l_curpos       := l_curpos + l_long_len;
            l_return_value := nvl(l_return_value,0) + l_long_len;

            exit when l_long_len = 0;

            l_long_piece := l_long_piece || l_long_val; -- added
        end loop;
    end if;
    dbms_sql.close_cursor( l_cursor ); -- added

    return l_long_piece;
exception
    when others then
        if dbms_sql.is_open( l_cursor ) then
            dbms_sql.close_cursor( l_cursor );
        end if;
        raise;
end long_length_modified;

To select a long column through the function:
select owner, view_name, text_length,
       long_length_modified( 'select text
                                from all_views
                               where owner = :owner and view_name = :x',
                             ':owner', owner, ':x', view_name ) ll
  from all_views

To search a remote long column:
select *
  from (
        select owner, view_name, text_length,
               long_length_modified( 'select text from all_views@dblink where owner = :owner and view_name = :x',
                                     ':owner', owner, ':x', view_name ) ll
          from all_views@dblink
       )
 where ll like '%test%'

This function can be used to replicate a remote table with a long column:
create table long_t as
select owner,view_name, text_length,
long_length_modified( 'select text from all_views@dblink where owner=:owner and view_name = :x',
':owner',owner,':x', view_name ) ll
from all_views@dblink



Hema, September 28, 2011 - 2:15 am UTC

Hi Tom
I have a column with datatype CLOB in Oracle 8i. When I update its content I get an error: string literal too long.

Can you please suggest me what I need to do?
Tom Kyte
September 28, 2011 - 5:40 am UTC

use bind variables. string literals in SQL are limited to 4000 bytes and 32k in plsql.


but you don't want to use literals, you must use binds for something like this.
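
A sketch of what that looks like - build the value in a PL/SQL variable and bind it, instead of inlining a literal (t, c and id are hypothetical names; on very old releases you may need dbms_lob.writeappend instead of direct assignment):

declare
    l_text clob;
begin
    l_text := rpad( 'x', 32000, 'x' );     -- far past the 4000-byte SQL literal limit
    update t set c = l_text where id = 1;  -- l_text is sent as a bind, not a literal
    commit;
end;
/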