
Question and Answer

Tom Kyte

Thanks for the question, Ramkumar.

Asked: November 27, 2005 - 11:29 pm UTC

Answered by: Tom Kyte - Last updated: March 19, 2012 - 9:19 pm UTC

Category: Database - Version: 9.2.0.4

Viewed 50K+ times!

You Asked

Hi Tom,


We have a query which has to fetch a set of rows over a db link, with a CONTAINS predicate on a CLOB column.

E.g.

There is a DB A and DB B with table T1( c1 varchar2(10),c2 clob) .

I want to run a query from DB A, over a dblink to DB B, to query T1 with a CONTAINS predicate on c2; based on that predicate, the matching rows have to return c1 from T1.

Can you suggest any way in which we can get this done? We have a couple of restrictions though:

We can't insert into DB A from DB B.
Table T1 contains around 100,000 rows and has around 20 columns.

Thanks

Ramkumar

and we said...

I do not believe that will be possible; it will raise

ORA-22992: cannot use LOB locators selected from remote tables


However, if you create a view on the remote site:


ops$tkyte@ORA10G> create table t ( x varchar2(10), y clob );

Table created.

ops$tkyte@ORA10G> insert into t values ( 'hello', 'some text here some more text' );

1 row created.

ops$tkyte@ORA10G> commit;

Commit complete.

ops$tkyte@ORA10G> create index t_idx on t(y) indextype is ctxsys.context;

Index created.

ops$tkyte@ORA10G> create or replace view v
2 as
3 select x from t where contains( y, sys_context('userenv','client_info') ) > 0;

View created.


You will be able to accomplish this via:

ops$tkyte@ORA9IR2> exec dbms_application_info.set_client_info@ora10g( 'text' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2> select * from v@ora10g;

X
----------
hello



you'll remotely execute the dbms_application_info call to populate the remote client_info, and then query the view - which HIDES the lob - and it will return the character strings you want...



Reviews

fetch clob over dblink

November 02, 2006 - 11:07 am UTC

Reviewer: A reader

hi Tom, my question here is, how could we fetch a CLOB field over a dblink when its length is longer than 32K?

i am having a hard time resolving the issue. should i convert the clob to varchar2 on the remote machine first? i can only fetch the field up to the varchar2 limit, i.e. 4000.

how come Oracle allows a LONG field over a dblink (even though LONG is considered legacy), while not allowing a CLOB over a dblink?

thanks

Tom Kyte

Followup  

November 02, 2006 - 12:11 pm UTC

you can fetch a clob over a link, but in short, one easy approach is:

create global temporary table gtt( x clob ) on commit delete rows;

just insert into that your clob and make it "local" so you can use the local dbms_lob functions on it (else you have to use dbms_lob@remote_site to work on it)
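In sketch form (the link name remote_site and the table t are illustrative, not from the thread):

```sql
-- Local scratch table; rows vanish at commit.
create global temporary table gtt ( x clob ) on commit delete rows;

-- Pull the remote CLOBs across the link into the local temp table.
-- An INSERT ... SELECT can move the LOB, where a plain
-- "select y from t@remote_site" would raise ORA-22992.
insert into gtt select y from t@remote_site;

-- The locators are now local, so local dbms_lob calls work normally.
select dbms_lob.getlength( x ) from gtt;
```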

Diff tables with CLOB on two databases

September 27, 2007 - 6:15 pm UTC

Reviewer: A reader

Tom,

We have replicated tables with CLOB columns using Advanced Replication. 

So, we need to diff the same table on local and remote DB.

Q1: Is there a way to diff a table with CLOB column across dblink? (copy over to a local global temp table as you mentioned in this thread?)

Q2: Even two local tables, I still can't compare the CLOB columns. This is what I tried:

create table t1 (x varchar2(10), y CLOB);
create table t2 (x varchar2(10), y CLOB);

insert into t1 values  ('hello','please compare me');
commit;
insert into t2 values  ('hello','am I different');
commit;

SQL> select * from t1 minus select * from t2;
select * from t1 minus select * from t2
*
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected - got CLOB

Apparently, the set operator MINUS does not support CLOB either.

Thanks, 

Tom Kyte

Followup  

September 28, 2007 - 5:17 pm UTC

clobs are not comparable in that sense, but you could:

ops$tkyte%ORA10GR2> create table x ( clob_val clob );

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> declare
  2          l_clob clob;
  3          l_data long := rpad( '*',10000,'*' );
  4  begin
  5          insert into x values ( empty_clob() ) returning clob_val into l_clob;
  6          for i in 1 .. 20
  7          loop
  8                  dbms_lob.writeAppend( l_clob, length(l_data), l_data );
  9          end loop;
 10  end;
 11  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2> exec dbms_output.put_line( dbms_crypto.hash_md4 );
1

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select dbms_lob.getlength(clob_val), dbms_crypto.hash(clob_val,1) from x;

DBMS_LOB.GETLENGTH(CLOB_VAL)
----------------------------
DBMS_CRYPTO.HASH(CLOB_VAL,1)
-------------------------------------------------------------------------------
                      200000
6B0289A886ECC7541F80DB90FEE583E1



compute the length and the hash - and compare those.
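Applied to the t1/t2 pair from the question, that comparison might look like this (1 is the dbms_crypto.HASH_MD4 constant; dbms_crypto requires an execute grant):

```sql
-- Compare (length, digest) pairs instead of the CLOBs themselves;
-- NUMBER and RAW are comparable, so MINUS works where CLOB would not.
select x, dbms_lob.getlength( y ) len, dbms_crypto.hash( y, 1 ) dgst from t1
minus
select x, dbms_lob.getlength( y ) len, dbms_crypto.hash( y, 1 ) dgst from t2;
```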

Thanks!!

October 01, 2007 - 10:07 pm UTC

Reviewer: A reader

Tom, you are the best!

compare the CLOB crypto value cross dblink

November 01, 2007 - 7:34 pm UTC

Reviewer: A reader

Tom,

To compare the dbms_crypto.hash(clob_column,1) value of the clob_column over table@dblink, I will have to save the remote table@dblink crypto.hash value to a 3rd table first. Should I define in the 3rd table the crypto column as RAW(32), RAW(64), RAW(4096), or?? 

See my example as follows:

10gR2_DB1> create table resume (
  name    varchar2(30),
  resume  clob);

(** Create the same table on the remote database DB2, also)

10gR2_DB1> create table diff_crypto (
    name varchar2(30),
    length number,
    crypto RAW(32)
 );

==> Q: Is RAW(32) the correct data type for crypto column?


10gR2_DB1> insert into diff_crypto
2  select name, dbms_lob.getlength(resume),dbms_crypto.hash(resume,1)
3  from resume;

commit;

Then, from DB2, I can compare the tables over dblink:

10gR2_DB2> select name, dbms_lob.getlength(resume) length,
           dbms_crypto.hash(resume,1) crypto 
           from resume
           minus
           select name, length, crypto from diff_crypto@DB1;

So, RAW(32) appears to work fine, but is length of (32) correct?  

Thanks for your help!
 
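One way to check the width needed is to measure a digest directly - MD4 (constant 1) produces a 128-bit, i.e. 16-byte, RAW, so RAW(16) would suffice and RAW(32) merely leaves room to spare. A quick sketch:

```sql
-- utl_raw.length reports the size of the digest in bytes.
-- MD4 and MD5 digests are 16 bytes; SHA-1 would be 20.
select utl_raw.length(
         dbms_crypto.hash( to_clob('am I different'), 1 )
       ) as digest_bytes
  from dual;
```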

clob

June 23, 2008 - 7:44 pm UTC

Reviewer: A reader


clob

December 20, 2008 - 11:25 am UTC

Reviewer: A reader

Tom:

Based on what you do below, do you create this temporary table in the local DB, and any commit will delete the record from it?

I have a similar situation, but I have DB "A" and "B". B is web enabled and the user wants to see the CLOB on the web page. Do i create a temp table in B and, every time a user views the page, copy the record over? Is this how you would do it? an on-demand copy from A--->B for the CLOB, since the file might change in A

<<you can fetch a clob over a link, but in short, one easy approach is:
create global temporary table gtt( x clob ) on commit delete rows;>>
Tom Kyte

Followup  

December 29, 2008 - 2:19 pm UTC

if the global temporary table is on commit delete rows - it will remove the data upon commit by definition - yes.


if you need to access and process the clob on B - which it sounds like you do - you would copy it over, yes.



(smk, how many times have we had this particular discussion.... many. many many times)

This is my test, just another example.

September 21, 2009 - 2:52 pm UTC

Reviewer: Ezequiel Russo from Argentina

Hi all, I just needed to use this solution, and I thought it would be useful to make this test available to everyone.

SQL> create table remote ( x varchar2(10), y clob );
Table created.

SQL> insert into remote values ( 'hello', 'This is a test');
1 row created.

SQL> commit;
Commit complete.

Then I create a Temporary Table.

SQL> create global temporary table gtt on commit delete rows as select y from remote@dblink;
Table created.

SQL> select * from gtt;
no rows selected

SQL> desc gtt
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 Y                                                  CLOB

I populate the temporary table (local) with the CLOB data from the remote database.

SQL> insert into gtt select y from remote@dblink;
1 row created.

Now, I have the CLOB data locally, at my temporary table.

SQL> select * from gtt;

Y
--------------------------------------------------------------------------------
This is a test

September 25, 2009 - 7:14 am UTC

Reviewer: A reader


To: Ezequiel Russo, works fine !

December 03, 2009 - 6:58 am UTC

Reviewer: Markus from Germany

Hi Ezequiel Russo,

thanks for sharing your solution.
Global Temp-Table works fine !

Great article on querying CLOB over DBLINK

February 24, 2010 - 4:05 pm UTC

Reviewer: Dina from CA USA

great article. Helped me solve a streams issue. I had a table with CLOB data that was out of sync in streams, and I used the example to solve that problem.

"Fetching CLOB column over DB Link",

April 13, 2010 - 5:19 am UTC

Reviewer: Alok Bhatnagar from INDIA

It is a remarkable article. Very, very useful… Thanks a lot Tom.. You are great.

PL/SQL function returning clob over dblink

February 02, 2012 - 8:23 am UTC

Reviewer: Soni from NJ, USA

Hi Tom,

From my database (DB A) I tried to use the following method to call a function [Package.F1@dblink(p1,p2)] on remote database (DB B) which returns a CLOB containing XML:

create global temporary table gtt( x clob ) on commit delete rows;

insert into gtt (x) (select Package.F1@dblink(p1,p2) from dual@dblink);

and I get the error:

ORA-06553: PLS-564: lob arguments are not permitted in calls to remote server.

I have very restricted access to database B, limited to calls to some functions over the dblink, so I will not be able to create any views on database B.

Please advise.

Thanks,
Soni
Tom Kyte

Followup  

February 02, 2012 - 9:05 am UTC

with extremely limited access on the remote site, you are extremely limited in what you'll be able to do. This will get it 4000 bytes at a time, but requires re-running the function over and over:

ops$tkyte%ORA11GR2> declare
  2          l_clob    clob;
  3          l_tmp     long;
  4          l_offset  number := 1;
  5  begin
  6          loop
  7                  select dbms_lob.substr@loopback@ora11gr2( foo@loopback@ora11gr2, 4000, l_offset )
  8                    into l_tmp
  9                    from dual@loopback@ora11gr2;
 10  
 11                  exit when l_tmp is null;
 12                  l_clob := l_clob || l_tmp;
 13                  l_offset := l_offset + length(l_tmp);
 14          end loop;
 15          dbms_output.put_line( 'l_clob length is ' || length(l_clob) );
 16  end;
 17  /
l_clob length is 320000

PL/SQL procedure successfully completed.




foo is my remote function that returns a clob.


maybe you can convince the remote site's 'owner' that in order to not squash their database with lots of repeated calls, you need the ability to create a global temporary table and function over there - so you can put the clob into a temp table and then retrieve it. They might like that idea better than you calling the function over and over.
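That arrangement could look something like this - the object names are hypothetical, and it assumes the remote DBA creates both pieces:

```sql
-- On the remote database (DB B):
create global temporary table clob_stage ( x clob ) on commit delete rows;

create or replace procedure stage_result( p1 in varchar2, p2 in varchar2 )
as
begin
        -- the CLOB never crosses the link here; it goes straight
        -- into the remote temporary table
        insert into clob_stage
        select package.f1( p1, p2 ) from dual;
end;
/

-- From the local database (DB A): one call to stage, one pull.
-- exec stage_result@dblink( 'a', 'b' );
-- insert into gtt select x from clob_stage@dblink;
```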

storing as clob

February 08, 2012 - 12:04 pm UTC

Reviewer: sat

per my requirements, I have to read a clob column and store it as a clob. while storing it, I have to find specific strings (say, from '<SELECT>' to '</SELECT>') and store whatever data exists between them in the table.

How do you achieve this? any help is appreciated

Thanks

Tom Kyte

Followup  

February 08, 2012 - 12:28 pm UTC

I have to read a clob column and store it as clob.

if you read a clob - isn't it already stored? as a clob?



You would read the clob just like anything else, parse it - using regexp or instr functions, substr it and retrieve what you need. Standard programming stuff.
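A sketch of the instr/substr approach, using a hypothetical table t( y clob ) - both functions work directly on CLOBs in SQL:

```sql
-- Extract the text between the first <SELECT> and </SELECT> markers.
select substr( y,
               instr( y, '<SELECT>' ) + length( '<SELECT>' ),
               instr( y, '</SELECT>' )
                 - instr( y, '<SELECT>' )
                 - length( '<SELECT>' ) ) as section_text
  from t
 where instr( y, '<SELECT>' ) > 0
   and instr( y, '</SELECT>' ) > 0;
```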

PL/SQL function returning clob over dblink

February 17, 2012 - 1:45 pm UTC

Reviewer: Soni from NJ, USA

Hi Tom,

Thanks a lot for your response (dated: Feb 2, 2012). I am currently using the method you suggested to get the CLOB from the remote database, but it is a slow process.

In your response you also mentioned that:
"maybe you can convince the remote site's 'owner' that in order to not squash their database with lots of repeated calls, you need the ability to create a global temporary table and function over there - so you can put the clob into a temp table and then retrieve it. "

Before putting this request to the higher-ups in my organization, I wanted to confirm with you whether it is sufficient to get permission just to create a Global Temporary Table, or whether it would be necessary to get permission to create a function over there as well.
From my local database, would I be able to insert the CLOB returned by the function at the remote database into the Global Temporary Table (at the remote database)?

Thanks,
Soni

Tom Kyte

Followup  

February 17, 2012 - 9:07 pm UTC

it would be best if you could create a function over there to do the insert - so you can run foo@loopback@ora11gr2 and insert it into the table without jumping through hoops.

Otherwise, what you'll need to do is execute a block of plsql code remotely using dbms_sql@remote_database - it'll be "doable" but "ugly as sin", and would require multiple calls over the database link to open a cursor, parse a statement, bind to it, execute it, and close the cursor.
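For the record, that "ugly as sin" path might look like the following sketch - it assumes a global temporary table clob_stage already exists on the remote site, and every dbms_sql call below is a separate round trip over the link:

```sql
declare
        l_cur  number;
        l_rc   number;
begin
        l_cur := dbms_sql.open_cursor@dblink;
        dbms_sql.parse@dblink( l_cur,
            'begin
                     insert into clob_stage
                     select package.f1( :b1, :b2 ) from dual;
             end;',
            dbms_sql.native );
        dbms_sql.bind_variable@dblink( l_cur, ':b1', 'first-parameter' );
        dbms_sql.bind_variable@dblink( l_cur, ':b2', 'second-parameter' );
        l_rc := dbms_sql.execute@dblink( l_cur );
        dbms_sql.close_cursor@dblink( l_cur );
end;
/
```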

will this work?

March 15, 2012 - 5:33 pm UTC

Reviewer: A reader from CAN

hello,

i have a requirement to create a view (not a materialized view, not a temp table) on a local database that fetches info, including CLOB data, from a remote database. data from this view is read by our application, and used to make inserts into various tables. an assumption is that the CLOB data will never exceed 12000 bytes.

to get around the CLOB over database link issue, what i've done is create 3 views as follows:

-- on remote database
-- create a table with CLOB column, and insert some data
create table foo (bar number, bazclob clob);

insert into foo select rownum as bar, to_clob(rpad('x',4000,'x'))||to_clob(rpad('x',4000,'x'))||to_clob(rpad('x',4000,'x')) as bazclob from dual connect by level < 11;

select bar, dbms_lob.getlength(bazclob) from foo;

       BAR DBMS_LOB.GETLENGTH(BAZCLOB)
---------- ---------------------------
         1                       12000
         2                       12000
         3                       12000
         4                       12000
         5                       12000
         6                       12000
         7                       12000
         8                       12000
         9                       12000
        10                       12000

-- create view that chunks CLOB column into 1000 byte varchar columns
create view vw_foo as
SELECT bar
,dbms_lob.substr(bazclob, 1000) as bazclob1
,dbms_lob.substr(bazclob, 1000,1001) as bazclob2
,dbms_lob.substr(bazclob, 1000,2001) as bazclob3
,dbms_lob.substr(bazclob, 1000,3001) as bazclob4
,dbms_lob.substr(bazclob, 1000,4001) as bazclob5
,dbms_lob.substr(bazclob, 1000,5001) as bazclob6
,dbms_lob.substr(bazclob, 1000,6001) as bazclob7
,dbms_lob.substr(bazclob, 1000,7001) as bazclob8
,dbms_lob.substr(bazclob, 1000,8001) as bazclob9
,dbms_lob.substr(bazclob, 1000,9001) as bazclob10
,dbms_lob.substr(bazclob, 1000,10001) as bazclob11
,dbms_lob.substr(bazclob, 1000,11001) as bazclob12
from foo;

-- on local database
-- create a staging view to "materialize" (using rownum) the chunked CLOB data from remote db
create view vw_local_foo_stage as select v.*, rownum as rnum from vw_foo@remote_db v;

-- finally, the finished view puts the chunks back together as a CLOB
create view vw_local_foo as
select v.bar
,to_clob(v.bazclob1)||
to_clob(v.bazclob2)||
to_clob(v.bazclob3)||
to_clob(v.bazclob4)||
to_clob(v.bazclob5)||
to_clob(v.bazclob6)||
to_clob(v.bazclob7)||
to_clob(v.bazclob8)||
to_clob(v.bazclob9)||
to_clob(v.bazclob10)||
to_clob(v.bazclob11)||
to_clob(v.bazclob12) as bazclob
from vw_local_foo_stage v
order by v.rnum;

-- check my local view
select bar, dbms_lob.getlength(bazclob) from vw_local_foo;

       BAR DBMS_LOB.GETLENGTH(BAZCLOB)
---------- ---------------------------
         1                       12000
         2                       12000
         3                       12000
         4                       12000
         5                       12000
         6                       12000
         7                       12000
         8                       12000
         9                       12000
        10                       12000

is this a bad kluge that i should avoid, or will it work for my requirements?

thanks,
yuin
Tom Kyte

Followup  

March 16, 2012 - 8:11 am UTC

why did you need rownum and more than one view?


March 16, 2012 - 11:53 am UTC

Reviewer: A reader

i'm using 2 views, and rownum, on the local database because when i tried using just a single view (and no rownum):

create view vw_local_foo as
select v.bar
,to_clob(v.bazclob1)||
to_clob(v.bazclob2)||
to_clob(v.bazclob3)||
to_clob(v.bazclob4)||
to_clob(v.bazclob5)||
to_clob(v.bazclob6)||
to_clob(v.bazclob7)||
to_clob(v.bazclob8)||
to_clob(v.bazclob9)||
to_clob(v.bazclob10)||
to_clob(v.bazclob11)||
to_clob(v.bazclob12) as bazclob
from vw_foo@remote_db v;

i keep getting this error when i select from the view:
select * from vw_local_foo;

ORA-22992: cannot use LOB locators selected from remote tables
22992. 00000 - "cannot use LOB locators selected from remote tables"
*Cause: A remote LOB column cannot be referenced.
*Action: Remove references to LOBs in remote tables.
Tom Kyte

Followup  

March 16, 2012 - 12:16 pm UTC

using rownum was forcing it to create a global temporary table under the covers, copy the entire table from remote to local and then query.

are you really really sure you want to do this for every query you run?

How about you tell us the requirement from the perspective of the application and then I can tell you how best to do that (and it might well require a global temporary table instead of a view or a stored procedure or something else)

March 16, 2012 - 12:49 pm UTC

Reviewer: A reader

the requirements from application side is like this:

the application runs against either an oracle or sql server database (i'm only dealing with the oracle side here). a user can manually (or via a scheduler) submit a request to "synchronize" data from a remote data source. when such a request is made, the application will
1. read data from a view that corresponds to the data source
2. manipulate the data
3. insert/update into appropriate tables

so it's the view in step 1 that does the job of providing remote data.
Tom Kyte

Followup  

March 16, 2012 - 4:07 pm UTC

why wouldn't you just use the already builtin replication capabilities?



March 16, 2012 - 5:08 pm UTC

Reviewer: A reader

by replication do you mean materialized views, oracle streams, or both?

our application is installed at customer sites; the customer provides a database (oracle or sql server) against which the application runs. we don't have much leeway in telling the customer what database features they need to have configured. and to complicate things more, our application has to be able to run on both oracle 10g and 11g, and oracle editions SE to EE. so whatever database feature we do request of our customers to configure, we need to make sure they are available for all of these versions/editions, and preferably is not an additional cost option (such as partitioning). basically our application has to work with the lowest common denominator; not fun but that's the way it is.

so it's really our customers who have to set up the data source views required by our application's synchronization function. my example of setting up the 3 views to fetch CLOB across database link is basically a "proof of concept" for customers on how they might most simply set up such views. is there an oracle replication feature that lends itself to what i've described?

thanks
Tom Kyte

Followup  

March 19, 2012 - 9:53 am UTC

either, or, both - all could apply. You are doing what is called "replication", you are re-inventing a wheel that has been invented for decades.

we don't have much leeway in telling the customer what database features they need to have configured.


I beg to differ, you guys "integrate" with each other, of course you do.

Investigate Read Only Materialized views - available since version 7.0 and supported on SE on up.

It would be a lot better than pulling an entire table over a dblink - with a simple materialized view log on the client site, you just get the changes.
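In sketch form - this assumes the master table has a primary key, and the names are illustrative:

```sql
-- On the customer's (master) site: track changes to foo.
create materialized view log on foo with primary key;

-- On the local site: a read-only, fast-refreshable copy.
-- Materialized view replication supports LOB columns,
-- unlike direct CLOB selects over a database link.
create materialized view foo_mv
refresh fast on demand
as
select * from foo@remote_db;

-- Refresh on whatever schedule the synchronization needs:
-- exec dbms_mview.refresh( 'FOO_MV' );
```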



Tell me also: are you relying on a timestamp to find "new rows"?

March 19, 2012 - 4:19 pm UTC

Reviewer: A reader

thanks, i'll take a look at read only materialized views.

yes, the application does look at a timestamp column in order to determine whether to update an existing record in the local db with a row from the data source view. why? is that a bad idea?

Tom Kyte

Followup  

March 19, 2012 - 9:19 pm UTC

think about it...


think about read consistency
think about when the timestamp is set
think about reads not blocking writes


you set the timestamp when you modify the record (NOT when you commit, when you modify it - the commit happens LATER).

ok, let's say you refreshed the table last at 12:00. You would have recorded in some table somewhere "we are refreshing at 12:00". Then you would have refreshed.

The next time you refresh, you read that table and say "last refresh was 12:00, need to pull everything that happened since 12:00"

But what if, at 11:59:59, a transaction updated a row and timestamped it 11:59:59 - but did not commit until 12:00:01?

Your 12:00:00 refresh would not see it because it had not happened yet, was NOT committed (and you would not block waiting for it in Oracle; in SQL Server you probably would, unless you were using their new read-consistent option).

Also, your next refresh would not see it because - IT WAS NOT timestamped at or after 12:00:00.

So, you would miss it.

Now what?

(this is why I asked you to think about it - a timestamp refresh is NOT as simple as it sounds)

DBLINK issue with CLOB

January 10, 2013 - 7:57 am UTC

Reviewer: A reader

Hi TOM,

it was a really good suggestion, creating a global temp table for the dblink issue with CLOB.

simply great !!!

thanks a ton!!!

vinay [v]
