how to select and insert a long type column

Question and Answer

Tom Kyte

Thanks for the question, Chitao.

Asked: August 09, 2000 - 4:46 pm UTC

Last updated: February 19, 2013 - 9:48 am UTC

Version: 8.1.6

Viewed 100K+ times!

You Asked

Tom

You answered how to select or insert a LONG type column in your FAQ, but what I need is to select and insert a LONG type column in one statement.


create table tb2( .., xlong long );

PROCEDURE copyLong IS
    p_query     varchar2(256);
    l_cursor    integer default dbms_sql.open_cursor;
    l_n         number;
    l_long_val  varchar2(250);
    l_long_len  number;
    l_buflen    number := 250;
    l_curpos    number := 0;
    myvar       long default '';
BEGIN
    p_query := 'insert into tb2( xlong, cl1, cl2, cl3 ) select xlong, cl1, cl2, cl3 from tb2 where ruleid = 2';
    dbms_sql.parse( l_cursor, p_query, dbms_sql.native );
    dbms_sql.define_column_long( l_cursor, 1 );
    dbms_sql.define_column_long( l_cursor, 5 );
    l_n := dbms_sql.execute( l_cursor );
    dbms_sql.close_cursor( l_cursor );
EXCEPTION
    when others then
        if dbms_sql.is_open( l_cursor ) then
            dbms_sql.close_cursor( l_cursor );
        end if;
        raise;
END copyLong;

What's wrong with this procedure? Any suggestion?


and Tom said...

A LONG / LONG RAW cannot be used in an "insert into" or "create table as select" statement (among a host of other limitations).

A LONG / LONG RAW that is greater than 32k cannot be inserted into another table via PL/SQL. You must use C or VB or Java or some other language for that. The SQL*Plus COPY command works as well.

In Oracle8i, release 8.1.6 which you have, the correct datatype to use is a CLOB. LONGs and LONG RAWs are deprecated types. A CLOB / BLOB has none of these limitations. All of your LONGs / LONG RAWs can be converted into CLOBs / BLOBs using the TO_LOB() built-in function.
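For illustration, a minimal sketch of that conversion (the table and column names here are made up):

create table t_old ( id number, txt long );
create table t_new ( id number, txt clob );

-- TO_LOB() is only legal in the select list of an INSERT ... SELECT
-- (or CREATE TABLE AS SELECT) that reads a LONG database column
insert into t_new ( id, txt )
select id, to_lob( txt )
  from t_old;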


Comments

Need to move Long data into BLOB column

Shefali, April 03, 2002 - 2:02 pm UTC

Hi Tom,

The TO_LOB() function works fine for moving LONG data into a table with a CLOB column, but it fails when moving LONG data into a table with a BLOB column.

E.g.
table t1 :
Name Type
----- -----
C1 CLOB

table t2 :
Name Type
----- -----
C1 LONG

table t3 :
Name Type
----- -----
C1 BLOB

insert into t1 select TO_LOB(c1) from t2; -- Works fine
insert into t3 select TO_LOB(c1) from t2; -- Gives error
ERROR at line 1:
ORA-00932: inconsistent datatypes

Why is that so? How do I insert LONG data into a table with a BLOB column?

Thanks,
Shefali

Tom Kyte
April 03, 2002 - 4:11 pm UTC

LONGs go to CLOBs.
LONG RAWs go to BLOBs.

It makes no sense to move a LONG to a BLOB; we don't do that.

can you give us an example...

A reader, June 27, 2002 - 8:55 am UTC

Could you give an example as to how we can use the COPY command to insert a LONG or CLOB column into another table...

Tom Kyte
June 27, 2002 - 10:39 am UTC

and how to MOVE a table with a LONG datatype

Lut Anna, December 05, 2002 - 10:00 am UTC

How can I use an ALTER TABLE ... MOVE statement on a table with a LONG column?

Tom Kyte
December 05, 2002 - 11:21 am UTC

by definition - you cannot.
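A minimal workaround sketch, consistent with the TO_LOB() advice above (the names and tablespace are illustrative): rebuild the table with the LONG converted to a CLOB, after which ALTER TABLE ... MOVE works.

create table t_new tablespace new_ts as
    select id, to_lob( txt ) txt
      from t_with_long;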



using COPY command

Lut Anny, December 05, 2002 - 11:33 am UTC

Using the SQL*Plus COPY command I get an error:

set arraysize 50
set long 100000
set copycommit 20000

COPY FROM usr/pwd@MYSID CREATE tab2 USING SELECT * FROM tab1

The simple TO_LOB function works for me:

t2 (a CLOB, b number)
t1 (a LONG, b number)
insert into t2( a, b ) select to_lob( a ), b from t1; -- works fine!


Move Long to Long

Sean, July 16, 2003 - 7:02 pm UTC

It works.  Thanks so much for the info.    Sean 

SQL> create table t1(id number, text long);

Table created.

SQL> insert into t1(id , text)
  2  values(1, 'Hello');

1 row created.

SQL> commit;

Commit complete.

SQL> create table t2(id number, text long);

Table created.

SQL> set arraysize 100
SQL> set long 100000
SQL> set copycommit 1
SQL> copy from scott/tiger@winter insert t2(id, text) using select id, text from t1;

Array fetch/bind size is 100. (arraysize is 100)
Will commit after every array bind. (copycommit is 1)
Maximum long size is 100000. (long is 100000)
   1 rows selected from scott@winter.
   1 rows inserted into T2.
   1 rows committed into T2 at DEFAULT HOST connection.

SQL>
 

LONG to BLOB

Steve, September 06, 2003 - 4:51 pm UTC

Tom, you wrote:

"LONGS goto CLOBS
LONG RAWS goto BLOBS

Makes no sense to move a long to a blob, we don't do that. "

Unfortunately, in our case, our Java develpers are insisting that we convert LONG to BLOB because when they tested CLOBs, there were issues with foreign characters.

Most records are less than 32K, so if that makes a big difference in your answer, we'll deal with the handful of records that are >32K.

We are on 8.1.7.4
Thanks.

Looking forward to seeing you at the NYOUG Day on Sep 23.
Are you going to talk about the 10g MODEL clause?

Tom Kyte
September 06, 2003 - 4:55 pm UTC

What issues do they perceive that they believe will be handled differently with LONGs?

give me something to work with here.


(i'll be talking about 10g in general, not the model clause specifically)

LONG to BLOB

Steve, September 08, 2003 - 12:32 pm UTC

Tom,
I sent an email to the Java developers about this, and here is their response (our DB characterset is WE8ISO8859P1):

"In an ideal world (a true UTF8 database), we would be able to use CLOB instead of BLOB to store large text data. But, because our database is configured with a Western European character set (meaning it doesn't know anything about Japanese, Chinese, etc), whenever we write text to either a VARCHAR2, LONG, CLOB, or BLOB field from a Java/JDBC program, we need to perform the following steps when writing to the database:

Convert Java String to a Java UTF8 byte array
Store the byte array in the database

When reading from the database, we do the reverse:
Read byte array from the database
Convert byte array to Java String using UTF8 character encoding

With VARCHAR2 and LONG fields, for writing we use the
PreparedStatement.setAsciiStream(..) method. For reading we use ResultSet.getBytes(...). This technique works in that the bytes we store in the database are the same as the bytes we read.

For CLOB fields, we can't read and write the same way as we do with VARCHAR2 or LONG fields. To write to a CLOB in JDBC, we use:

CLOB clob = (CLOB)pstmt.getClob(3);
OutputStream os = clob.getAsciiOutputStream();
os.write(byteArray);

To read from a CLOB, we use:

ResultSet rs = pstmt.executeQuery();
rs.next();
Clob clob = rs.getClob(1);
InputStream is = clob.getAsciiStream();

The problem is that the CLOB reading/writing routines attempt to perform character conversion when reading or writing using the OCI JDBC driver, which causes corruption of foreign characters. This corruption does not happen when reading and writing strictly using the Thin JDBC driver, but I don't think we should restrict ourselves to using the Thin driver only.

The long term solution is to go to a true UTF8 database, where the database character set is UTF8, and no funny conversion occurs in the database. This would also involve converting all existing data to UTF8, which is not a trivial task. Then, we would be able to use CLOBs to store large text.
The short term solution is to use BLOBs instead of CLOBs, since BLOBs do not perform character conversion."

What do you make of this?


Tom Kyte
September 08, 2003 - 12:45 pm UTC

use a blob then -- use a blob.

it'll be trivial. they just stream into a blob.

that is what they really have here -- a blob. (RAW BINARY DATA).

I tried your example and get this error

Sanjay Jha, October 02, 2003 - 4:23 pm UTC

What is the problem here Tom?

SQL> create table foo 
  2  ( The_Whole_View   varchar2(65), 
  3    TextLength       number, 
  4    TheText          Long ) 
  5  /

Table created.

SQL> copy from ci_user/ci_user@cica_dev insert foo (the_whole_view, textlength, thetext ) using select owner||'.'||view_name, text_length, text from all_views;

Array fetch/bind size is 15. (arraysize is 15)
Will commit when done. (copycommit is 0)
Maximum long size is 80. (long is 80)

ERROR:
ORA--3924922: Message -3924922 not found;  product=RDBMS; facility=ORA
 

Tom Kyte
October 02, 2003 - 6:48 pm UTC

you can try PUSHING instead of PULLING to see if that works around it.

copy to ...

It worked...

Sanjay Jha, October 06, 2003 - 12:33 pm UTC

I am not sure whether one has to provide the user/password@connect-string combination twice, even though it is for the same database..., but when I did it that way, it worked.

SQL> COPY FROM ci_user/ci_user@cica_dev TO ci_user/ci_user@cica_dev insert foo (the_whole_view, textlength, thetext ) using select owner||'.'||view_name, text_length, text from all_views;

Array fetch/bind size is 15. (arraysize is 15)
Will commit when done. (copycommit is 0)
Maximum long size is 80. (long is 80)
   1198 rows selected from ci_user@cica_dev.
   1198 rows inserted into FOO.
   1198 rows committed into FOO at ci_user@cica_dev.

Question:
How does the COPY command work? Is it different from normal DML?

Tom Kyte
October 06, 2003 - 1:50 pm UTC

copy is just a sqlplus "command" that is translated into a "select" and an "insert"

sqlplus is just doing:


for x in ( select * from t1 )
loop
    insert into t2 values ....
end loop;

logically.

Am I delirious?

Oleksandr Alesinskyy, December 04, 2003 - 11:57 am UTC

I have some vague recollection that older versions of Oracle (7.2? 7.3?) had a DBMS_LONG package that allowed piecewise processing of LONG data from PL/SQL. Am I delirious, or has this package silently gone?

Tom Kyte
December 04, 2003 - 12:27 pm UTC

it is dbms_sql that allows (present tense) piecewise READ access to longs.

never write.
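A minimal sketch of such a piecewise read, reusing the tb2/xlong names from the original question (the 250-byte buffer size is arbitrary):

declare
    l_cursor  integer := dbms_sql.open_cursor;
    l_status  integer;
    l_buf     varchar2(250);
    l_buflen  number := 250;
    l_offset  number := 0;
    l_retlen  number;
begin
    dbms_sql.parse( l_cursor, 'select xlong from tb2 where ruleid = 2', dbms_sql.native );
    dbms_sql.define_column_long( l_cursor, 1 );
    l_status := dbms_sql.execute( l_cursor );
    if dbms_sql.fetch_rows( l_cursor ) > 0 then
        loop
            dbms_sql.column_value_long( l_cursor, 1, l_buflen, l_offset, l_buf, l_retlen );
            exit when l_retlen = 0;
            -- process the l_retlen bytes now in l_buf, then advance the offset
            l_offset := l_offset + l_retlen;
        end loop;
    end if;
    dbms_sql.close_cursor( l_cursor );
end;
/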

Copying table with LONG datatype.

Sai., February 23, 2004 - 3:38 pm UTC

Hi Tom,

I've been waiting to get a slice of your time to ask a question for the past few weeks.

I would really appreciate if you help me with this follow up question:

We have a 150G table with a LONG column, and we need to migrate this database from a 2k to an 8k block size. The maximum downtime we can get is 12 hours. We are thinking of using a combination of SharePlex and a standby database for this purpose.

Another plan was to copy that 150G table in about 6 hours; the SQL*Plus COPY was for sure taking more than 6 hours. We are left with Perl DBI, Oracle OCI, or Pro*C.

I would like to parallelize this process by using a bunch of OCI threads based on rowid ranges, inserting into a new table with lots of freelists.

Can you please help me write a small OCI program to copy a table with a LONG column, using the APPEND hint in the insert operation?

Thanks a lot.

Tom Kyte
February 23, 2004 - 5:04 pm UTC

well, my question would be "why do i NEED to migrate from 2k to 8k"

and why couldn't i migrate the long to a clob..... so I can just use alter table move in the future to move from tablespace to tablespace (and change the blocksize on the fly)

and sigh, never a version...


If I had a 150gig table, with a long I'd be looking to fry all of my fish at the same time -- like "we need a clob, we need partitioning to be able to maintain this in the future" as well.

Re: Copying table with LONG datatype

Sai, February 23, 2004 - 6:15 pm UTC

Hi Tom,

Thanks for your help. That is the big constraint from our third-party vendor application; we have no control, at least for now. We will eventually migrate the LONG column to a CLOB when we have the new application version available.

Can you please help me with writing an OCI program to copy a table with a LONG column using the APPEND hint (it would be great if we had something like BULK insert in PL/SQL)?

Thanks for your time.

Tom Kyte
February 23, 2004 - 6:56 pm UTC

version
version
version
version

and why "2k to 8k", what is the motivation. perhaps this move is "not going to do very very much"

whats the science behind the change?

Re: Copying table with LONG datatype

Sai, February 23, 2004 - 11:18 pm UTC

Hi Tom,

We are on 64-bit Oracle Enterprise Edition 8.1.7.4.

Our troubled third-party vendor application does too many full table scans of huge tables, and we don't have any way of changing their code. Apart from that, all of our other databases are on 8K; it was made a standard at our company to create all databases with an 8K block size. We grow at the rate of 100% every couple of years. Our architects made that decision. Migrating to an 8K block size would improve full table scans and much reduce row chaining/migration. When we get the newer application version later this year, we won't have many of these problems.

Perfect logic/science doesn't always win at big companies when it conflicts with the standards made by their architects.

Would you please help me write that OCI program? Your help is very much appreciated.

Thanks.

Tom Kyte
February 24, 2004 - 6:43 am UTC

OCI isn't the way to go. Too much work.


I'd look at using the sqlplus COPY command (run in parallel; slice the table up by rowid ranges or primary key ranges)

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:10498431232211

I'd look at exp/imp (run in parallel again, slice the table up and use query= on the exp command line)

I'd look at "prove to me that a) setting db_file_multiblock_read_count = 4xNORMAL won't have the same effect as going to an 8k block AND b) prove to me that we'll reduce row chaining (longs will chain in 8k as well as 2k if they exceed the block size before I waste a couple 10's or 100's of thousands of dollars in company time doing this "just because")


those longs are going to be problematic since they will defeat attempts at array processing. You'll just have to parallelize it.
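A hedged sketch of that slicing, reusing the COPY command shown earlier in the thread (the connect string, table names, and key ranges are all illustrative; run each statement in its own SQL*Plus session):

copy from usr/pwd@olddb insert new_t using select * from big_t where id between 1 and 50000000
copy from usr/pwd@olddb insert new_t using select * from big_t where id between 50000001 and 100000000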

Counting migrated rows.

Sai, February 24, 2004 - 9:49 am UTC

Hi Tom,

Thanks so much for your help on this. One last question:

In one of your articles you said a full table scan would not follow the forwarding rowid when it hits a migrated row, because it will eventually get there. Does this work similarly when we parallelize jobs using rowid ranges, i.e., is the same row fetched twice in different rowid ranges?

Thanks

Tom Kyte
February 24, 2004 - 12:44 pm UTC

as far as I know, yes; never really set up a test to "prove" it, though.

Deprecation of sqlplus COPY command

Mat, April 15, 2005 - 6:54 am UTC

I've just found myself using the COPY command in SQL*Plus to transfer tables with LONG columns (I was trying to compare LONG data stored in the data dictionary with that stored in another DB).

I note that the 9.2 and 10g documentation state that this command will be deprecated in the future.

Does this mean that Oracle will no longer provide any means to carry out this kind of operation in the future?

Will Oracle continue to use the LONG datatype to store some dictionary data?

Regards,

Tom Kyte
April 15, 2005 - 9:18 am UTC

it has been saying that for a while, yes.

data pump can be used to do this in 10g nicely as well.
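A hypothetical Data Pump invocation (the directory object, dump file, and table names are all illustrative):

expdp usr/pwd tables=big_t directory=dp_dir dumpfile=big_t.dmp
impdp usr/pwd tables=big_t directory=dp_dir dumpfile=big_t.dmp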

humaira, September 29, 2005 - 12:35 pm UTC

Hello Tom,

I am again stuck with a migration problem.
We are migrating data from an 8i to a 9i database. One of the tables has a LONG column, and we need to migrate that as well.
Now, the problem is we cannot convert the LONG column to a CLOB or anything else, for that matter, as the business rules don't allow it.
We used a trigger, and in it we used a SELECT statement over a dblink. But this does not get the LONG column; it gives an ORA-00600 internal error.
Hence we have to migrate it by some other means.
Is this possible? If yes, then how?

Tom Kyte
September 30, 2005 - 8:21 am UTC

export and import.
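A hypothetical classic exp/imp invocation for that 8i-to-9i move (connect strings and the table name are illustrative):

exp userid=usr/pwd@db8i tables=t_with_long file=t_with_long.dmp
imp userid=usr/pwd@db9i tables=t_with_long file=t_with_long.dmp ignore=y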

How do I search for a string in a LONG datatype column?

A Reader, August 08, 2006 - 9:13 pm UTC

Hi Tom,

We have an old database which has some LONG columns. When I query a table with a LONG column using where LONG_COLUMN like '%my_string%', I get the following error:

ORA-00932: inconsistent datatypes: expected NUMBER got LONG

How can we search for a specific string in LONG column?

Thank you for your help.

Tom Kyte
August 09, 2006 - 10:36 am UTC

you can index it using Oracle Text,

and then use contains() on it.
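A minimal sketch of that approach (assumes Oracle Text is installed; the table and column names are made up):

-- a CONTEXT index can be created on a LONG column
create index my_long_idx on my_table( long_column ) indextype is ctxsys.context;

select * from my_table where contains( long_column, 'my_string' ) > 0;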

Insert Long type variable into CLOB

Sanjay, February 18, 2013 - 8:14 am UTC

Hi Tom,

I have a procedure in which a variable is declared with the LONG datatype.

In the code, a query string is dynamically built and stored in this LONG variable. Now I want to debug the query string, so I tried to store this LONG variable into a table with a CLOB column, but I am getting an error:

Error: PL/SQL: ORA-00932: inconsistent datatypes: expected LONG BINARY got CHAR
Line: 2566
Text: 'MyMAPPING',To_Lob(v_Final_String),'2'

I am giving a sample PL/SQL block, as the actual block might run on for pages.


Declare
    v_Final_String Long;
Begin
    v_Final_String := 'Insert into emp(Empno,Ename) values('||vEno||','''||vEname||''')';

    Insert into Temp(vclob) Values(to_clob(v_Final_String));

    Execute immediate v_Final_String;
End;
/




Tom Kyte
February 19, 2013 - 9:48 am UTC

wow, can you spell SQL injection? I hope so, because your code is going to be full of it, very very very full of it. this is some scary stuff. very very scary.

non-scalable (no binds; I'm crying on the inside, totally)
excessively insecure

sigh... it makes me sad to see this day after day after day... over and over....

insert into temp(vclob) values ( v_final_string );

to_clob only works in an insert as select or create table as select where the source DATABASE column is a long. It is not used on the long subtype in plsql (which is really just a varchar2(32760)).
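A sketch of the block rewritten along those lines -- binds instead of concatenated literals, and the LONG variable inserted directly (the emp/temp names come from the question; the values are made up):

declare
    v_eno   number       := 7999;
    v_ename varchar2(10) := 'SMITH';
    v_stmt  long;  -- in PL/SQL this is just a varchar2(32760)
begin
    v_stmt := 'insert into emp( empno, ename ) values ( :1, :2 )';

    -- the LONG variable converts implicitly; no to_clob() needed
    insert into temp( vclob ) values ( v_stmt );

    execute immediate v_stmt using v_eno, v_ename;
end;
/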

