
Question and Answer

Tom Kyte

Thanks for the question, Justin.

Asked: December 19, 2002 - 3:44 pm UTC

Last updated: March 01, 2010 - 9:07 am UTC

Version: 8.1.7

Viewed 1000+ times

You Asked


Hi Tom,

I have two databases and what I need to do is synchronize 5 tables of one database with the other database.

Tom, do you suggest I should go with replication, or should I create a link with triggers on these tables so that these tables perform DML on each other?

Thanks.


and Tom said...

easy question, simple answer


use replication -- don't re-invent the wheel.





Comments

A reader, March 25, 2003 - 6:53 pm UTC

Hi Tom,

We have Oracle 8.1.5.0 running on Sol 8 and we are in the process of upgrading to 8.1.7.4 on another Sol 8 server.
Our Sr. DBA has suggested replication as we cannot shut down our database to upgrade. I am not sure if this is the right approach; can you please comment on this?

Thanks.





Tom Kyte
March 25, 2003 - 8:45 pm UTC

are you at or near capacity on the 815 server?

have you tested on your test system what replication will do to you? and your performance?

You can minimize your downtime but for a system upgrade like this, I believe you will have some (even with replication -- if you spend the next 6 months testing it -- you will have downtime).

This is bigger than a breadbox. I've been down this route with some customers already. It takes hours of discussion. My advice -- you need to thoroughly -- off to the side -- TEST THIS 100%.

Consider the downtime you will incur if you don't.


A reader, March 26, 2003 - 9:01 am UTC

Tom,

According to you, which would be the best approach to upgrading, so that we have minimum downtime?

Thanks.

Tom Kyte
March 26, 2003 - 9:13 am UTC

practice, practice, practice.

I was working with a customer who was able to tell me that it would take them 57 minutes from start to finish -- why? they had done it over 1/2 dozen times -- practiced it like a fire drill. Had it down pat. Shaved every second off possible.

Did this take time?

Yes -- they spent a good 4 weeks setting up for this.

Did the upgrade go off without a hitch?

Yes -- they knew exactly what to do, when to do it, how to do it and had fully tested the application code beforehand. They hit (and fixed) problems in test before they became a problem in production (there is no such thing as a P1 tar in test you know -- they happen when you do something in production that messes it all up)



A reader, March 26, 2003 - 9:19 am UTC

Thanks Tom!!

Question on data replication

PRS, March 26, 2003 - 9:19 am UTC

Hi Tom,

I have a schema (SNAP) with all objects defined as read only snapshots
with incremental refresh. We refresh all of these snapshots early
morning/late night in this schema. The source instance has snapshot logs
defined for each table.


My question is: I have a different schema (ADM) where I need to push this
data into a different table structure. One snapshot can map to more
than one table in this schema. I could see two solutions for this.

1. Define an after insert/update/delete trigger on each snapshot (in the SNAP schema)
and log those ids in some table. Then write a procedure to push
that data into more than one table in the ADM schema. This seems to be
a good option. But Oracle says they do not guarantee that a trigger on a snapshot
will always execute 100%. Oracle says do not define any trigger on internal
objects such as snapshots. This is the response I got from a TAR.

2. Write a procedure to compare the difference in data and push that data into
more than one table in the ADM schema. But this is the most expensive option, as
every day there will be at least 100,000 rows to pick up for each table. Each table's
volume is 3.5 to 4 million rows. So I think this is not the right option.


Can you suggest any other valid option?

Thanks,
PRS

Tom Kyte
March 26, 2003 - 9:34 am UTC

1) that is not a good idea. The snapshot refresh mechanism is a weird beast and changes from release to release. Read only snapshots do not support triggers; this would not be a suggested course of action.

2) Why would ADM not just create snapshots using horizontal and vertical slices of the source data itself??

Need some info

PRS, March 26, 2003 - 12:23 pm UTC

I cannot define a snapshot in the ADM schema as one row of one table in the SNAP schema can map to one row in many tables in the ADM schema. So you have to have some kind of procedural logic to do that.
I just want to capture the daily changed (insert/update/delete) records, log them into some table, and use that table as a driving table to load the data into the ADM schema.
We are using Oracle9i Release 2. I heard that with Oracle Streams you define it as a table in the SNAP schema and apply the changes through Streams from the source to the destination database. And as long as I define it as a table in the SNAP schema, I can also have the trigger on it.

Please shed some light on the Oracle Streams option here.

Thanks
PRS

Tom Kyte
March 26, 2003 - 4:26 pm UTC

yes, streams can do that -- with your help, you'll get logical change records and can procedurally process them.

http://docs.oracle.com/docs/cd/B10501_01/nav/docindex.htm#index-STR



question

A reader, March 26, 2003 - 2:39 pm UTC

please see the question above

Tom Kyte
March 26, 2003 - 4:33 pm UTC

why? oh I guess i'm too slow today.... anyway -- i don't sit here all day, catch as catch can.

(funny, but in my interface -- your review is the top, looking up I see my wall ;)



sorry

A reader, March 26, 2003 - 4:44 pm UTC

I did not mean it that way.

Question

PRS, May 08, 2003 - 12:07 pm UTC

Hi Tom,
I have a database in 9i (9.2) which has a schema of read only snapshots coming from Oracle8i (8.1.7). We refresh the read only snapshots in the 9i instance incrementally twice a day.

I want to replicate this snapshot data into some other tables in the 9i database by writing a procedure or triggers on a snapshot. But Oracle does not support triggers on a read only snapshot. So I am exploring the idea of using Oracle Streams.
So I am creating a separate small database (A) with Oracle9i (9.2) and defining the read only snapshots in that database A. Then I set up an Oracle stream from database A to database B, where the snapshot is defined as a table.
So I can have a trigger on the table in database B and I can move the data into different table structures. So the question is:

can I define an Oracle stream on a read only snapshot in database A pointing to tables with the same structure in database B?

Your reply is appreciated.
Thanks,
Prashant Shah

Tom Kyte
May 09, 2003 - 11:54 am UTC

why would you want to replicate that data into the same database?

PRS, May 09, 2003 - 1:21 pm UTC

The data are in a different structure. We are using the PeopleSoft CRM application. It has its own customer data model. So we are trying to fit our current CDM model into the PeopleSoft CDM model.

Tom Kyte
May 09, 2003 - 3:08 pm UTC

yes, you can use streams to do the ETL'ing for you. you would set up a custom apply for the logical change record and do whatever you wanted to it.

Example of stream

PRS, May 14, 2003 - 1:46 pm UTC

Hi Tom,
I did not see any example of stream on this site.
Do you have any example showing the concept of oracle stream? If you have it, can you please post it to this site?

Thanks,
PRS

What is the advantage of using stream over replication?

Yong, June 25, 2003 - 6:03 pm UTC

Tom,

Both Streams and replication can do the job above.

If one uses Oracle Streams to capture any change to the database, it is almost the same as a standby database.

What is the advantage of using Streams over replication in this case (except that Streams can capture/apply to a non-Oracle database)?

Tom Kyte
June 25, 2003 - 8:58 pm UTC

no, standby is a little like streams, streams is not like standby. standby can use some of the streams technology -- but standby is lots more than streams.


(replication can apply to non-oracle as well in some cases)....

streams is the way forward, less intrusive, more flexible (rules, transformations, filters)

Yong, June 26, 2003 - 1:29 pm UTC

Got it!
Thanks,

Streams

A reader, August 11, 2003 - 8:59 am UTC

Tom,
Can streams be used to replicate data to a set of tables if the destination data is in an XML file (say no DB here), or if the destination data is in an XML file stored in a table column which is a BLOB?

Any suggestions other than streams are really appreciated.

Thank you

Tom Kyte
August 11, 2003 - 10:00 am UTC

you would write your own custom apply -- to maintain the XML.

so streams would

o capture the changes
o stage the changes
o invoke your custom routine to apply those changes to your XML document instance.
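A custom apply handler is just a PL/SQL procedure that receives the LCR wrapped in an ANYDATA. As a minimal sketch (the table and procedure names are made up; check the DBMS_APPLY_ADM documentation for your release), registering such a routine for inserts on one table might look like:

begin
  dbms_apply_adm.set_dml_handler(
    object_name    => 'SCOTT.ORDERS',          -- hypothetical source table
    object_type    => 'TABLE',
    operation_name => 'INSERT',
    error_handler  => false,
    user_procedure => 'SCOTT.MAINTAIN_XML');   -- your procedure: (in_any IN SYS.ANYDATA)
end;
/

The SCOTT.MAINTAIN_XML procedure would extract the SYS.LCR$_ROW_RECORD from the ANYDATA and update the XML document instance accordingly (a worked LCR-parsing example appears later in this thread).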

Streams

A reader, August 11, 2003 - 1:28 pm UTC

Thank you

October Sky

Andy, October 16, 2003 - 2:15 pm UTC

I have 3 DBs with almost identical schemas. We plan on using two-way replication technology to make them virtually a single database. I am impressed by the flexibility (rules, transformations) of Streams. Would you suggest that we go for Streams rather than two-way replication in my situation? Will Oracle use Streams to replace replication in the long run?

Tom Kyte
October 16, 2003 - 5:31 pm UTC

i would suggest a single database in virtually every situation.


streams is the future direction for replication implementations.


i would seriously ask myself however "do i really want replication AT ALL, wouldn't a single database serve me infinitely better"

An example

Matt, October 17, 2003 - 3:48 am UTC

"i would suggest a single database in virtually every situation.
"

Can you please suggest a situation where you might not consider using multiple schemas in a single database instance?

Tom Kyte
October 17, 2003 - 10:02 am UTC



i'm at a loss for words. I guess I'd have to say "i don't have any such situations"


3rd party apps are about it. they sometimes need their own database as they may have specific version/setup requirements. Here the database is less a database than part of their black box.

But for my stuff ("my" meaning my company's stuff), nope, i'd be at a loss for words to describe a scenario whereby I would run more than one instance on a machine.

replication

venkat, October 28, 2003 - 1:52 am UTC

Tom,

We need to replicate around 40% of the database objects from the production site into another site for UAT (unidirectional). The users want "closer to synchronous replication". Which is the best way to achieve this? Oracle version is 8.1.7.4 on Sun Solaris.

Regards,
Venkat



Tom Kyte
October 28, 2003 - 7:56 am UTC

read only snapshots.

see the replication guide.
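As a minimal sketch of the read only snapshot approach (the table name and database link are made up; see the Replication guide for the full details and options):

-- on the production (master) site: a snapshot log, so fast refreshes are possible
create snapshot log on orders;

-- on the UAT site: a read only snapshot over a database link, refreshed every 5 minutes
create snapshot orders_snap
refresh fast
start with sysdate next sysdate + 5/1440
as
select * from orders@prod_link;

"closer to synchronous" then becomes a question of how short you make the refresh interval (and how much work each refresh has to do).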

replication

Venkat, October 29, 2003 - 4:48 am UTC

Tom,

Thank you very much. If all the objects of a schema need to be replicated, what is the mechanism you would recommend?

Regards,
Venkat

Tom Kyte
October 29, 2003 - 6:55 am UTC

use OEM to set up read only snapshots via a gui that'll let you do many objects at once.

replication

Tarun Babu, March 09, 2004 - 3:13 am UTC

Hi Tom,
We have our database x at a different location and database y at our location, which is the development database where we are testing all the procedures, packages and forms. We need two tables t1 and t2, along with their entire data, from our x database in y. There are no db links etc. I am wondering how to get these tables to the y database. Can we generate any script file for it? Please help.



Tom Kyte
March 09, 2004 - 11:43 am UTC

exp userid=u/p tables=(t1,t2)

ftp the file (in binary mode) to them

let them imp it. (read about export/import in the server utilities guide)

replicate or not

Tarun, March 11, 2004 - 1:33 am UTC

Thanks tom for the reply.
I got the tables, by exp/imp.

Can we call a PL/SQL procedure to write transformations using Streams?

Ajeet, May 20, 2004 - 8:02 am UTC

Tom:
Can I use a PL/SQL procedure to do the transformation and load into the target tables? I want to capture any DML changes in the source tables -- then propagate these changes and finally apply them to target tables in a data warehouse schema, which is a STAR schema. My source tables are part of a normalized schema -- I have not seen any documentation in the 9i Streams documentation for such cases. You have talked about "Custom Apply" -- did you mean using a PL/SQL function/procedure/package?
Both my source and target instances are Oracle 9i R2.
Any short example or pointer to a document would be a great help.
I am new to Streams -- I know the concepts and have tried a few simple cases after reading the Streams documentation for Oracle 9i.

Thanks much

Tom Kyte
May 20, 2004 - 11:39 am UTC

did you read chapter 14 in the streams guide for 9ir2?

Yes - I read it.

Ajeet, May 20, 2004 - 11:49 am UTC

Tom:
I have read it. It does say that we can use custom apply, but the code used for the PL/SQL procedure is quite different from what we normally write -- there are things like get_object etc. which I have never seen, and I am not able to get more detail about them. Also the procedure's logic is too small -- it just does a plain insert (pages 15 and 16 of chapter 14).
It also handles insert/update as separate operations, but in my case I have to do lots of DML (insert/update/truncate etc.). I even want to use a pipelined function as the transformation routine before loading the DW from the source tables, and I need to do lookups -- so pretty much the standard ETL we do before loading a DW table from a source system.
--
But I guess you want to tell me that I should try to understand more.
I still ask for help, if you have some time.

Thanks and Regards,
Ajeet


Tom Kyte
May 20, 2004 - 12:22 pm UTC

streams is message based, you'll be processing a "stream", transforming "a message" and performing an operation based on that "message"

it is not "dump and load" technology -- you won't be doing truncates and other things using this - it is a "stream of data". If you want to do the dump and load, you would just let the original apply apply the data to a stage table, ETL that as before and bulk load it -- but that wouldn't be a really good use of this. You might need to change the way you are thinking about this problem if you want to use this -- else you are back to "dump, stage, etl, load" rather than "mine redo, transform message, apply in real time"




Why can we not marry the two -- that is, Streams and Transform/Load

Ajeet, May 21, 2004 - 7:58 am UTC

Tom -- very clear. You taught me the difference between Streams and traditional ETL.
I want to understand why we cannot get the best of both worlds -- i.e. stream the data, since it sends me any changed rows in the source (that would replace the lengthy extract process), and then take it, transform it and load it as part of the apply process. The problem is a limitation of Streams, I guess, or of the knowledge base available on this. For example, I have spent almost 2 days (unsuccessfully) trying to find a way to get column values from an LCR -- a logical change record -- from the source. No Oracle manual says how to access it, or at least whether I can do it or not.
I still think that for real-time data warehousing, Streams plus traditional PL/SQL is a very good technology. We can do it if I can get the individual column attributes. I am still doing a design of experiments and we have not taken a decision on it yet, but it is a cool feature.
I am not an expert, more of a developer, so your thoughts and suggestions always help us.
--Here is my question on LCRs:

Suppose I have a table EMP and I want to capture any DML change in this table -- fine, I can do it using Streams capture, and I get LCRs as part of the stream. Now I want to load all such LCRs into a new table EMPLOYEE_DIMENSION --
and before that I want to do some ETL, say, for example, the salary column of the emp table needs to be changed to salary*10000/5 based on dept and so on.
My problem is that I don't know how to get the salary column value out of the LCR the stream gives me.

Any help would be a real great help.
---


Thanks
Ajeet

Tom Kyte
May 21, 2004 - 10:56 am UTC

not a limitation of streams, a DIFFERENT technology.

old technology:

o dump
o stage
o etl
o stage
o load

new technology:

o capture change
o etl change
o apply change


old tech: batch process, run N times a day at fixed points
new tech: stream, always on, 'real time' type of feed, trickle feed


you can stream into a stage table (as described) so you could:

old technology with a twist:

o stream into stage
o etl
o stage
o load


did you read about the dbms_streams package in the supplied packages guide?




Thanks ---but How to get values from LCR

Ajeet, May 22, 2004 - 8:28 am UTC

Thanks Tom,
I am still searching for a solution to get individual attributes from an LCR. Yes, I did read dbms_streams and almost all related packages in the supplied packages guide -- every one of them tells me how to convert a non-LCR message to an LCR, but I want what I said. I will keep reading and searching. Posting this here in case I can get some help from you
or anyone else.

Thanks
Ajeet

Tom Kyte
May 22, 2004 - 5:19 pm UTC

the LCR is just an XML glob, the schema is defined in appendix A of the streams guide.


Thanks -- It worked

Ajeet, May 23, 2004 - 4:49 am UTC

Tom,
Thanks for pointing me, several times, to the supplied packages guide and the Streams guide. In fact I was able to parse the LCRs and perform the transformations -- still testing performance and other things.
In fact I want to post the complete SQL and process which I have followed for this -- can I do this?

Thanks
Ajeet

Tom Kyte
May 23, 2004 - 9:37 am UTC

please do, yes.

Sample examples -- for Streams

Ajeet, June 14, 2004 - 6:10 am UTC

Sorry for being late in posting this:
I will post 2 cases where I have applied Streams.

case 1) Stream DML/DDL changes from one database to another -- NO transformation applied -- the target database uses a trigger (row level) to apply the required transformations on the streamed data -- so my Streams part ends here.
case 2) It tries to apply a transformation using an LCR during the apply process -- this did not work because it was not fast enough to meet my requirements.

For syntax/semantics and setup information, please refer to the 9i Streams guide on OTN.
http://download-west.oracle.com/docs/cd/B10501_01/nav/docindex.htm#index-JPU

Here are the steps we need to follow:
1) Create a Streams queue at the source database,
something like this:
begin
dbms_streams_adm.set_up_queue(
queue_table => 'rdmt',
queue_name => 'rdmnewq',
queue_user => 'ops$ngta3zt');
END;
/
2) Then add a table propagation rule for the table whose DML/DDL changes we have to stream:
begin
dbms_streams_adm.add_table_propagation_rules(
table_name => 'ops$ngta3zt.FLT_SAGE_ENG_OUTPUT',
streams_name => 'rdmnewstr',
source_queue_name => 'ops$ngta3zt.rdmnewq',
destination_queue_name => 'ops$ngta3zt.rdmnewq@evnlabl2',
include_dml => true,
include_ddl => false,
source_database => 'evnlabl1');
end;
/
3) You have to have a private database link from the source database to the target database.
In my case evnlabl2 is the db link name. Global_Rules has to be True.
4) Then define the capture rules:
begin
dbms_streams_adm.add_table_rules(
table_name => 'ops$ngta3zt.FLT_SAGE_ENG_OUTPUT',
streams_type => 'capture',
streams_name => 'rdmnewcap',
queue_name => 'ops$ngta3zt.rdmnewq',
include_dml => true,
include_ddl => false);
end;
/
5) Then instantiate the table which needs to be streamed, using export/import -- you will need Oracle 9i SQL*Plus:
exp userid=ops\$ngta3zt/xxxxx@evnlabl1 FILE=jobs_instant.dmp TABLES=FLT_SAGE_ENG_OUTPUT OBJECT_CONSISTENT=y ROWS=n

imp userid=ops\$ngta3zt/xxxxx@evnlabl2 FILE=jobs_instant.dmp TABLES=FLT_SAGE_ENG_OUTPUT IGNORE=y COMMIT=y LOG=import.log STREAMS_INSTANTIATION=y

6) Then go to the target database.

Create the queue there as well:
begin
dbms_streams_adm.set_up_queue(
queue_table => 'rdmt',
queue_name => 'rdmnewq',
queue_user => 'ops$ngta3zt');
END;
/
7) Define an apply process and start it:
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'ops$ngta3zt.FLT_SAGE_ENG_OUTPUT',
streams_type => 'apply',
streams_name => 'rdmnewapp',
queue_name => 'ops$ngta3zt.rdmnewq',
include_dml => true,
include_ddl => false,
source_database => 'evnlabl1');
END;
/
begin
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'rdmnewapp',
parameter => 'disable_on_error',
value => 'n');
end;
/
Start the apply process:
begin
DBMS_APPLY_ADM.START_APPLY(
apply_name => 'rdmnewapp');
end;
/
8) Go to the source database -- start the capture process:
BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(
capture_name => 'CAPTURE_ME');
END;
/
----END of Case Study-------------

Now at the target database you can add a trigger per your requirement -- but you have to do a small setup with Streams in order for this trigger to work:

begin
DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY
('STRMADMIN','FLT_SAGE_TG',false);
end;
/
Otherwise your trigger will not work -- read the reasons in detail in the Streams guide.
============================================================

Case 2) is exactly the same as case 1 except that I use a user-defined procedure to do some transformation -- it parses an LCR and does the transformation -- it performs poorly -- maybe I don't know enough about it.

create or replace procedure mes_handler (event in SYS.AnyData)
IS
  rowlcr  SYS.LCR$_ROW_RECORD;
  res     NUMBER;
  str     VARCHAR2(255);
  newlist SYS.LCR$_ROW_LIST;
  num     NUMBER;
BEGIN
  enq_row_lcr(event);                      -- user-defined helper that re-enqueues the event
  res     := event.getobject(rowlcr);      -- extract the row LCR from the AnyData payload
  newlist := rowlcr.GET_VALUES('new');     -- gives the list of new column values
  --
  FOR i IN 1..newlist.COUNT LOOP           -- parse the individual attributes in the LCR
    --
    IF newlist(i) IS NOT NULL THEN
      IF newlist(i).column_name = 'ENGINE_ID' THEN
        num := newlist(i).data.getvarchar2(str);   -- standard functionality; str receives the value
        INSERT INTO ops$ngta3zt.rdm_d_engine
        VALUES (1, str, sysdate, null, null, null, null);
        -- load into the target table -- you can use anything here (any SQL, procedure etc.) to do your ETL
      END IF;
    END IF;
  END LOOP;
END;
/
---END of the Script....

There are several views -- dynamic as well as static -- which tell you the status and performance of Streams, such as
dba_capture, dba_apply, dba_apply_coordinator etc.

-- Setting parameters like parallelism to a higher degree (provided your server configuration supports that) can enhance your Streams performance.

-- Issue: I was not able to get better information on Streams other than the 9i PDF guide, which is OK for learning about it but does not resolve many of the developer kinds of issues we face.

Thanks
Ajeet






Data Capture from Sybase db

Suhail, June 21, 2004 - 4:16 pm UTC

It's a wonderful example of Oracle Streams. I have a slightly different situation: I have Sybase and Oracle databases. There are some tables which are common to both databases. I need to capture only the changed data from the Sybase tables and load it into the Oracle tables. How can I do it using Oracle Streams, in real time or through a nightly batch? Or is there a better way to do it? I will appreciate your input.

Thanks

Suhail

There is a complete example in the Streams pdf

Ajeet, July 01, 2004 - 8:20 am UTC

Hi
http://download-west.oracle.com/docs/cd/B10501_01/nav/docindex.htm#index-JPU

This document has chapter 22, which talks about this in detail.

Thanks
Ajeet


error with streams

bn, July 29, 2004 - 10:50 am UTC

Hello TOM ,

I get error "ORA-01426 numeric overflow" when trying the following .

begin
dbms_streams_adm.add_table_rules(
table_name => 'scott.rd_stream',
streams_type => 'capture',
streams_name => 'rdmnewcap',
queue_name => 'scott.rdmnewq',
include_dml => true,
include_ddl => false);
end;

Methods set_up_queue and add_table_propagation_rules worked fine.
Please help.

TIA

Tom Kyte
July 29, 2004 - 1:10 pm UTC

what's the exact release there?

error with streams

bn, July 29, 2004 - 10:57 am UTC

Hello TOM ,

Oracle is 9.2.0.1.0.

TIA

Tom Kyte
July 29, 2004 - 1:13 pm UTC

i see an issue with that release -- that was corrected in the 9202 patch set. 9205 is the current patch release

Replication/Streams/AQ

Reader, July 29, 2004 - 2:18 pm UTC

If we need to bring data from one database to another, then we can use replication (snapshots, MVs), streams or AQ.

Which is the most appropriate for a situation like this:

Say database BB will get a table which is based on 5 tables on database AA. The tables on database AA contain millions of rows.

At present we have snapshot logs on all these 5 tables and use materialized views (refresh fast) every few hours.
Now in the coming months this technique will not work, as the query is getting complex and will no longer be supported by fast refresh.
A complete refresh is not acceptable because of time constraints.

What could be the solution in this case ?
AQ /streams ?

Thanks ,




Tom Kyte
July 29, 2004 - 2:22 pm UTC

and how would you maintain the "complex" view using incremental changes if we cannot?

therein lies the crux of the problem -- if the view becomes so complex that we cannot maintain it incrementally -- how can you?

MV vs Replication

reader, August 16, 2004 - 9:22 pm UTC

Is replication based on snapshots (MVs)?

I have the following scenario :

Database (A,817) : table a,b,c,d,e

mv1 is based on (a,b,c)
mv2 is based on (b,c,d,e)

on database B, which is 9iR2:

mvb1 is complete refresh of mv1@a
mvb2 is complete refresh of mv2@b


No changes will be done on database B.

i.e. I use dbms_mview.refresh('mvb1');

I am now unable to do this as mvb1 and mvb2 have a lot of FK constraints and the complete refresh fails.

Will replication help?
What other solutions do I have?

I tried doing a fast refresh by creating logs on all of them, but it fails with a complex query error or a deadlock error.



Thanks ,





Tom Kyte
August 17, 2004 - 7:25 am UTC

a lot of fk constraints to what exactly? need a tad more detail.

Replication vs MV (contd)

reader, August 17, 2004 - 8:15 am UTC

Sorry for the incomplete info.

What I mean by FK is that mvb1 and mvb2 have PKs. Many other tables have FKs on these tables, and a truncate or complete refresh gives the following error:
ORA-02266 -- unique/primary keys in table referenced by enabled foreign keys.

My other question :

If I have a table which is based on a UNION of two other tables, then I cannot do a fast refresh.
What other alternatives do I have here (other than a complete refresh)?

Thanks



Tom Kyte
August 17, 2004 - 8:53 am UTC

you cannot do this (think about this, you are asking for sure disaster here!)

A simple child key pointing to the MV would make it impossible for the MV to refresh.  You have the most breakable system on the planet.  A single child table at the refresh site will prevent all refreshes from taking place -- all refreshes.

You do not even want to consider having a fk to a mv, just not workable.

(you can put them into a refresh group, then the refresh is done via "delete" and "insert" instead of truncate and insert /*+ append */, you would have to create the fks using the DEFERRABLE option -- as the refresh process will defer all deferrable constraints.  BUT THIS WOULD BE A HORRIBLE DESIGN that will cause you nothing but heartburn after heartburn after heartburn -- think about it).


feel as if I'm showing you how to cut off a perfectly working nose, despite the face:


ops$tkyte@ORA9IR2> create table emp as select * from scott.emp;
 
Table created.
 
ops$tkyte@ORA9IR2> alter table emp add constraint emp_pk primary key (empno);
 
Table altered.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table dept as select * from scott.dept;
 
Table created.
 
ops$tkyte@ORA9IR2> alter table dept add constraint dept_pk primary key(deptno);
 
Table altered.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create materialized view log on dept with primary key including new values;
 
Materialized view log created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create materialized view mv
  2  refresh complete
  3  as
  4  select * from dept;
 
Materialized view created.
 
<b>pretend dept is remote, emp and mv are local...</b>

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter table emp add constraint emp_fk foreign key(deptno) references mv(deptno);
 
Table altered.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_mview.refresh( 'MV' );
BEGIN dbms_mview.refresh( 'MV' ); END;
 
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-02266: unique/primary keys in table referenced by enabled foreign keys
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 803
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 860
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 841
ORA-06512: at line 1
 
<b>that is what you get now</b>
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create materialized view dummy
  2  refresh complete
  3  as select * from dual;
 
Materialized view created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter table emp drop constraint emp_fk;
 
Table altered.
 
ops$tkyte@ORA9IR2> alter table emp add constraint emp_fk foreign key(deptno) references mv(deptno) deferrable;
 
Table altered.

<b>that is how the constraint must be defined</b>
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> begin
  2     dbms_refresh.make( name             =>'group1',
  3                        list             =>'mv, dummy',
  4                        next_date    =>sysdate+1,
  5                        interval         =>'sysdate+1/24',
  6                        implicit_destroy =>true);
  7  end;
  8  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_refresh.refresh( 'GROUP1' );
 
PL/SQL procedure successfully completed.

<b>that does both mv's "consistently" in a single transaction (hence no truncate, no append, hence redo = yes, lots, copious amounts)</b>

 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> delete from emp where deptno = 20;
 
5 rows deleted.
 
ops$tkyte@ORA9IR2> delete from dept where deptno = 20;
 
1 row deleted.
 
ops$tkyte@ORA9IR2> commit;
 
Commit complete.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from mv;
 
    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON
 
ops$tkyte@ORA9IR2> exec dbms_refresh.refresh( 'GROUP1' );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> select * from mv;
 
    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON

<b>
there are two ways to refresh a mv, incremental (aka 'fast') and complete.  </b>

think about the semantics of a UNION 

a union b is "distinct ( a+b )"

maybe you can use union all?  

MV and Replication

Reader, August 18, 2004 - 12:53 pm UTC

1. Thanks a lot.
2. While doing a complete refresh, is there any use in creating logs on the table?
3. What you have shown above, replication would have done the same thing but it would have been transparent to me. Correct?
4. If it is master/slave replication, then is there a difference between an Oracle replication vs a materialized view implementation?
5. Do analytic functions come under aggregate functions?

Thanks again ,

Tom Kyte
August 18, 2004 - 1:13 pm UTC

2) nope
3) yes.
4) materialized views are also known as snapshots and are in fact "replication". MV's in a single database have more features (query rewrite, refresh on commit) but a replicated table can be a MV.
5) analytics are just like aggregates, yes.

Cannot use MV then ..

Reader, August 20, 2004 - 3:36 pm UTC

Thanks for all the clarifications .
I have a situation where I am creating an MV on a local system with 5 million rows which uses 5 tables from a remote system.
A fast refresh cannot be used as the query contains max(date).
A complete refresh cannot be done because it takes a lot of time.
Is there any other method I can use to bring the data to this new system?
Is AQ a solution here?

Thanks again .

Tom Kyte
August 21, 2004 - 11:17 am UTC

what is "a long time"

and can you break this into two mv's -- one with simple joins and the other with the aggregate, which can then be joined.

MV refresh

Reader, August 21, 2004 - 9:49 pm UTC

The long time is 20 mins.
The requirement is that every 30 mins to 1 hour it has to be refreshed.
I will try to explore the possibility of doing 2 MVs. One will contain the aggregates, which will be a complete refresh.

I was wondering if you could tell us about using some other methods of doing it.

Thanks

Tom Kyte
August 22, 2004 - 8:00 am UTC

the two mv's (and perhaps an mv of the two mv's that does the join) is all I can think of.

a) multi-table joins -- can be done incremental
b) single table aggregates -- can be done incremental

multi-table joins with aggregates -- complete

so, do (a), do (b) and then maybe do "a join b"
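As a minimal sketch of that idea (all table, column and link names are made up; over a database link the materialized view logs would live on the remote master site, and the exact fast-refresh restrictions -- especially around MAX and deletes -- are version dependent, so verify each MV with DBMS_MVIEW.EXPLAIN_MVIEW before relying on it):

-- materialized view logs on the masters, with rowids, so the join MV can fast refresh
create materialized view log on orders      with rowid;
create materialized view log on order_lines with rowid;

-- (a) multi-table join only -- fast refreshable
create materialized view mv_join
refresh fast on demand
as
select o.rowid o_rid, l.rowid l_rid,
       o.order_id, o.cust_id, l.line_amt
  from orders o, order_lines l
 where l.order_id = o.order_id;

-- (b) single-table aggregate -- fast refreshable (count(*) is required)
create materialized view log on payments with rowid (order_id, paid_date) including new values;

create materialized view mv_agg
refresh fast on demand
as
select order_id, max(paid_date) last_paid, count(*) cnt
  from payments
 group by order_id;

-- and then "a join b", either as a query or as another MV on top of the two
select j.order_id, j.cust_id, a.last_paid
  from mv_join j, mv_agg a
 where a.order_id = j.order_id;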

view and snapshot

tesloach Gach, December 26, 2004 - 7:05 am UTC

Dear Tom,
I don't know the difference between a snapshot and a view.
Would you please make it clear to me, and explain their importance?
Thank you

Tom Kyte
December 26, 2004 - 12:47 pm UTC

a view is a stored query. select * from view just puts the sql string in place of view and executes the query. say view as:

select count(*) from five_hundred_billion_row_table;


the view would count the 500 billion rows when you queried it.



A snapshot is a "stored result set". If you had a snapshot that used the same defining query -- it would already have the count in a single row, single column "snapshot table"....



Think of a view like a window -- the view you see it realtime and current. You could take a picture of the window -- the resulting snapshot would be a frozen point in time copy of what you see in that window -- you no longer need to look out the window to see what the "answer" would be.
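A minimal sketch of the difference, reusing the counting example above (the object names are made up):

-- a view: just a stored query, re-executed (and the rows re-counted) every time you select from it
create view row_count_v as
select count(*) cnt from five_hundred_billion_row_table;

-- a snapshot (materialized view): a stored result set, refreshed on a schedule
create materialized view row_count_mv
refresh complete
start with sysdate next sysdate + 1      -- recompute the "picture" once a day
as
select count(*) cnt from five_hundred_billion_row_table;

Selecting from row_count_mv just reads the single pre-computed row; selecting from row_count_v counts the table again.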

Replication of Development Database

Vikas Sharma, January 24, 2005 - 10:34 am UTC

Hi Tom,

We have our development going on at 2 sites/locations (connected using VPN). We want to use two databases so that both sites will have their own local database to access. But we want to synchronize both databases so that if a developer at site 1 changes a database object, the change is reflected in the other database automatically.

Would replication be a possible solution to achieve this? Please clarify.

Thanks

Vikas Sharma

Tom Kyte
January 24, 2005 - 11:29 am UTC

it would complicate your life extensively as you have to think about "what happens when we ultimately both update the same rows at the same time -- how do we resolve that conflict"

So, you would be designing update anywhere logic into your application. You should rethink this -- having two "local databases". Your design time will increase many fold, your time to implement will go way up, your testing will be much more complex, and your administration needs will be much higher.

A single database is the answer.

Can data stream or logical standby database in physical standby database

Goh, April 21, 2005 - 9:29 am UTC

Dear Tom,

Can we use data stream or logical standby database in our physical standby server. Thanks in advance .

Rgds
Goh

Tom Kyte
April 21, 2005 - 12:01 pm UTC

not sure what you are asking.

LCR is just an XML glob

Adrian Davies, May 25, 2005 - 12:22 pm UTC

Your followup to '...How to get values from LCR' posted May 22, 2004 says that 'the LCR is just an XML glob'

What is an XML glob, and can an LCR be easily converted into XML?

I'm looking into how to pass the contents of LCR messages into a java method (which is loaded inside the Oracle JVM). Getting the LCR in XML format will enable me to pass in the XML as a parameter.


Tom Kyte
May 25, 2005 - 3:16 pm UTC

a glob of data, like a "blob", just a bunch.

'the LCR is just an XML document instance' would be a more official way to say it perhaps.


http://docs.oracle.com/docs/cd/B10501_01/server.920/a96571/ap_xmlschema.htm#620714

(you might want to glance at the streams doc in general for details)

Converting LCR to XMLType

Adrian Davies, May 26, 2005 - 9:53 am UTC

Thanks Tom.

Is there a function in Oracle that can convert the LCR (SYS.LCR$_ROW_RECORD) to an XMLType?

You can do this for Oracle objects using the XMLType(obj) constructor or the XMLType.createXML(obj) function. However, for SYS.LCR$_ROW_LIST type objects, the Oracle compiler complains about the type:

PLS-00306: wrong number or types of arguments in call to 'XMLTYPE'



Tom Kyte
May 26, 2005 - 10:32 am UTC

not that I am aware of, the data is already all parsed out in that structure.

Converting LCR to XMLType

Adrian Davies, May 26, 2005 - 1:02 pm UTC

It's a shame there isn't a quick way of converting an LCR (SYS.LCR$_ROW_RECORD) to XML.

I will now resort to writing my own PL/SQL code to extract the LCR data using the following logic:

lcr     SYS.LCR$_ROW_RECORD;
newlist SYS.LCR$_ROW_LIST;

newlist := lcr.GET_VALUES('new');
FOR i IN 1 .. newlist.COUNT LOOP
  column_name := newlist(i).column_name;
  type_name   := newlist(i).data.GETTYPENAME();
  IF type_name = 'SYS.VARCHAR2' THEN
    column_value := newlist(i).data.GETVARCHAR2(str);
  END IF;
END LOOP;

Thanks again for your replies.


A reader, November 22, 2005 - 4:03 pm UTC

Tom,

I used DBMS_APPLY_ADM.COMPARE_OLD_VALUES to ignore values other than the primary key. How can I undo this? I searched the docs and Metalink and was unable to find an answer.

Thanks.

desc t
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
C1 NOT NULL NUMBER(38)
C2 VARCHAR2(10)
C3 VARCHAR2(10)
C4 VARCHAR2(10)

DECLARE
cols DBMS_UTILITY.LNAME_ARRAY;
BEGIN
cols(1) := 'c2';
cols(2) := 'c3';
cols(3) := 'c4';
DBMS_APPLY_ADM.COMPARE_OLD_VALUES(
object_name => 'scott.t1',
column_table => cols,
operation => '*',
compare => false);
END;
/

Tom Kyte
November 22, 2005 - 4:24 pm UTC

how can you undo what?

A reader, November 22, 2005 - 5:08 pm UTC

Tom,

Sorry for not being clear enough. After executing that package, while resolving update conflicts it will check the primary key values only (please correct me if I am wrong). I want it to be how it was before executing that package, that is, comparing all the values (supplemental logging for the table is enabled).

Hope you understand my question.

Thanks.

Tom Kyte
November 23, 2005 - 9:12 am UTC

if your undo retention is sufficient (or you have generated very little undo) do a flashback query.


else, you can fire up logminer and see the compensating DML to put the table back.


I'm not sure what the reference to the "primary value only" means - but if you want the table back the way it was at some prior point in time

o flashback query (if undo exists)
o flashback table (10g - if undo exists)
o logminer (very very very hard)
o point in time recovery (ask your dba)

for example.
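As a minimal sketch of the first two options (using a hypothetical table; this only works while sufficient undo is still available):

-- flashback query: read the table as it was an hour ago
select * from scott.t as of timestamp (systimestamp - interval '60' minute);

-- 10g flashback table: put the rows back (row movement must be enabled first)
alter table scott.t enable row movement;
flashback table scott.t to timestamp (systimestamp - interval '60' minute);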

A reader, November 23, 2005 - 11:19 am UTC

Tom,

We are not on the same page you are thinking way beyond.

1)
SQL>create table scott.t3 ( c1 int primary key, c2 varchar2(10) not null, c3 varchar2(10))
   /

Table created.

2)
SQL> alter table scott.t3 add supplemental log data (all) columns;

Table altered.

3)
SQL>SELECT OBJECT_OWNER,
       OBJECT_NAME,
       COLUMN_NAME,
       COMPARE_OLD_ON_DELETE,
       COMPARE_OLD_ON_UPDATE
  FROM DBA_APPLY_TABLE_COLUMNS
  WHERE APPLY_DATABASE_LINK IS NULL
  and object_name IN ('T3'); 
  
no rows selected

4)
SQL>DECLARE
  cols DBMS_UTILITY.LNAME_ARRAY;
  BEGIN
    cols(1) := 'c2';
    cols(2) := 'c3';
      DBMS_APPLY_ADM.COMPARE_OLD_VALUES(
    object_name  => 'scott.t3',
    column_table => cols,
    operation    => '*',
    compare      => FALSE);
END; 
/

PL/SQL procedure successfully completed.

5)
SQL>SELECT OBJECT_OWNER,
       OBJECT_NAME,
       COLUMN_NAME,
       COMPARE_OLD_ON_DELETE,
       COMPARE_OLD_ON_UPDATE
  FROM DBA_APPLY_TABLE_COLUMNS
  WHERE APPLY_DATABASE_LINK IS NULL
  and object_name IN ('T3');  

Table
Owner Table Name   Column Name                    COM COM
----- ------------ ------------------------------ --- ---
SCOTT T3           C2                             NO  NO

SCOTT T3           C3                             NO  NO


2 rows selected.

How can i reverse the change that was done in step 4. When i execute query in step 5 I need the output to be no rows selected.


Thanks. 

Tom Kyte
November 23, 2005 - 7:32 pm UTC

looks like it might be like a DEFAULT for a column - once you have one, you'll always have one (you cannot get "rid" of a default, you can only make the default be NULL which is the default 'default')


I don't see a way to get rid of them. You can change them, but not get rid of them as far as I can see. You would simply set it to the default behavior (of comparing all columns) using this API
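A minimal sketch of that, mirroring the call from the follow-up above but with compare => TRUE to restore the default behavior for those columns (DBA_APPLY_TABLE_COLUMNS will then still show rows for them, just with the default compare settings):

DECLARE
  cols DBMS_UTILITY.LNAME_ARRAY;
BEGIN
  cols(1) := 'c2';
  cols(2) := 'c3';
  DBMS_APPLY_ADM.COMPARE_OLD_VALUES(
    object_name  => 'scott.t3',
    column_table => cols,
    operation    => '*',
    compare      => TRUE);   -- back to comparing the old values for these columns
END;
/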

Quite confused between Streams and Replication

Arindam Mukherjee, July 01, 2006 - 4:50 am UTC

Respected Mr. Tom,

Right now I am working on Oracle 9i, but with a single instance. I would very much like to know this technology -- replication and Streams. I have already read your book "Effective Oracle by Design" from cover to cover, but sorry to say I could not find the answer to my question. My question is: what should I read now -- Oracle 9i Advanced Replication or Oracle 9i Streams -- to get to know the replication feature in view of my current version as well as my future version like 10g? Since you always ask us to read the Oracle documentation, I think it's better to ask more questions after reading the document. Please suggest which document I should start reading.
Please don't ignore this question; maybe it's silly to you, but it's really crucial to me.

Regards,
Arindam Mukherjee


Tom Kyte
July 01, 2006 - 7:55 am UTC

Streams would be the way going forward for replication in Oracle.

Are the limitations of Streams 9i addressed in 10G

Srinivas Narashimalu, July 06, 2006 - 4:26 pm UTC

Hi Tom,

We had implemented Oracle Streams in my last assignment. It was on Oracle 9.2.0.6. We had lots of problems keeping the source and target in sync. Oracle had even recommended that we use a "Flow Control" procedure, which is just a PL/SQL procedure that checks the number of logical records in the queue on the source side and disables the capture process if there is a large number of records still to be applied at the target. But this workaround was not sufficient to keep Streams going. Also we had lots of shared pool memory problems. We were told by Oracle that all of these would be handled in 10g; has that been addressed?

Thanks,
Srinivas

Tom Kyte
July 08, 2006 - 10:21 am UTC

sort of "broad" - but yes, streams is now in it's 3rd generation in 10gr2, has more infrastructure to manage it (tools, gui's, whatever).

CLOB column in SYS.LCR$_ROW_RECORD

Andriy Terletskyy, August 08, 2006 - 12:23 pm UTC

Hi Tom,

Please explain how to add a CLOB column to SYS.LCR$_ROW_RECORD Type.

Thanks.

DECLARE
lcr SYS.LCR$_ROW_RECORD;
lob CLOB := 'Test';
BEGIN
lcr := SYS.LCR$_ROW_RECORD.CONSTRUCT(source_database_name => SYS_CONTEXT('USERENV','DB_NAME')
,command_type => 'INSERT'
,object_owner => 'BV'
,object_name => 'APPLIKATION$HI'
);

lcr.add_column('NEW','MENUE',SYS.ANYDATA.CONVERTCLOB(lob));
END;
/
ERROR at line 1:
ORA-00902: invalid datatype
ORA-06512: at "SYS.LCR$_ROW_RECORD", line 0
ORA-06512: at line 11


DML and DDL order of application (9.2)

Javier, January 26, 2007 - 9:41 am UTC

Hello:
I have a problem with Streams simple replication:
1. First I make a DDL change to one table: I create a new column.
2. Then I make some changes to the data in the new column: update table set column=xxx.

In the replicated database I get an error: "ORA-23308: object 2.30777 does not exist or is invalid" and I think it is because the new column change took place after the update command.

Is this the way it happens? Is it a bug?

Thanks.
Javier

Replication or Standby database

Yuna, April 20, 2007 - 12:22 pm UTC

Hi Tom,

We have a project that asks for two identical databases, so that when something happens to one database the application will transfer automatically to the other database.

At the beginning, I definitely thought that we needed database replication. But after reading a little bit, I am doubting my original thought.

1) whether the performance will be affected a lot by replication;
2) we only use one database at a time; would a standby database be enough?

Thank you very much for your help!
Tom Kyte
April 20, 2007 - 1:49 pm UTC

replication != failover

dataguard = failover

you want dataguard.

replication

A reader, April 20, 2007 - 2:19 pm UTC

Thank you Tom.

Yuna

Replicate ? Reinvent? Different problem?

Fernando, October 31, 2007 - 12:10 pm UTC

Hi Tom,

I'd like to thank you for this site and all the effort you put to answer questions from all around the world.

After reading your answers about using replication or not, I'm still unable to decide which way to go. You seem to consider replication the last resort, while at the same time you advocate not reinventing the wheel.

So, here is my dilemma:

Site A is a 9.2.0.7 DW database (running on an HP Unix machine with very high performance disk arrays) with financial data (two fact tables of 500 million rows each, partitioned by year, low cardinality dimensions in general but one in the order of 20 million rows). Fact tables have surrogate IDs. The ETL process runs every night. Query times are fine but the user base is growing.


Now a new OLTP app is about to be developed, 10g / application server / J2EE stuff. It requires READ ONLY, day-before financial data, and will reside on another distinct machine (Site B) in the same building.
Query SPEED is top priority (OLTP times).

Day-before financial data is composed of account balances (1 million rows -- which will have to be calculated from the previous month's balances and the current month's account transactions, which is the only detail that we currently have), account transactions (200,000 rows) and account owner data (3 million rows -- but only a few thousand updates per day).

My proposed solutions:

--------
Solution 1.

Precalculate on Site A the day-before account balances (result is approx. 1 million rows) using materialized views (ETL runs each day at 2:00 PM approx.), plus the day-before account transactions and owner data modifications. Then use "read only" replication from Site B. The new app will access this now-local data through stored procedures.

Solution 2.

Precalculate and insert/merge the new data into Site B via dblink every night (1.2 million rows, plus thousands of modified rows), rebuild indexes, purge no-longer-current data every weekend; the app will use stored procedures for access, etc.

Solution 3.

Precalculate on site A. Dblink to see MVs from site B.

--------

One last word: the management decision to use a pure Oracle solution is in part due to my evangelization efforts using your books as support. Thank you again!

Fernando
Tom Kyte
November 01, 2007 - 4:21 pm UTC

Yes, I do say that if you are going to replicate, and I cannot talk you out of it, you better not compound your troubles by writing your own replication software.

I would rule out #3. You do not want to be constantly hitting dblinks. It decreases your availability (if A is down, B is down, if B is down, B is down - B is down more often).

for #2, I don't get the "rebuild bit"

I cannot really comment on #1 and #2 - I don't have sufficient information. I don't know what your ETL involves.

Replicate ? Reinvent? Different problem?

Fernando, November 02, 2007 - 2:42 pm UTC

Thank you very much, Tom.

'for #2, I don't get the "rebuild bit"'

My intention is to have a "truncate destination, invalidate indexes, insert append, then rebuild indexes" process whenever the data, because of its nature, permits this kind of manipulation. For instance, account balance data has to be calculated every day, so the trunc/insert/rebuild thing applies.


'I cannot really comment on #1 and #2 - I don't have sufficient information. I don't know what your ETL involves.'

Ok. Sorry.
Please let me know what that info would be. I wouldn't want to clutter this thread with what I think could be relevant for you.

Thanks.

Fernando
Tom Kyte
November 02, 2007 - 5:12 pm UTC

just insert /*+ APPEND */, with the indexes in place, why scan the data N times to rebuild the indexes when you are direct pathing the data load anyway.

the additional information would fall into the category of "that's a new question"...
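A minimal sketch of that point (the table and link names are made up): a direct-path load with the indexes left in place, rather than invalidating and rebuilding them:

truncate table acct_balance_local;

insert /*+ append */ into acct_balance_local
select account_id, balance_date, balance_amt
  from acct_balance_mv@site_a;

commit;    -- the direct-path loaded rows are not visible until the commit

The indexes are maintained as part of the direct-path load, so there is no separate full scan of the data just to rebuild them.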

Replicate ? Reinvent? Different problem?

Fernando, November 03, 2007 - 11:15 am UTC

Ok. Time to check my knowledge about direct path inserts, then. Thank you.

I will go for Solution 2) because it's what I know and have tried and tested, and I would know when the data is in place totally refreshed. Besides, the only difference between 1) and 2) is the insert/merge part across the dblink which would replace "ro replication" and it'll be relatively simple to extend our publishing process to access another database.

Apparently, replication is something a little bit messy for us to try in this case.

Thank you again.

Best regards.

Fernando

Replicate ? Reinvent? Different problem?

Fernando, November 03, 2007 - 12:34 pm UTC

Regarding the original question, I've made a mistake: the ETL runs each night at 2:00 AM approx.

And I'd add that it finishes its job in about half an hour. It is a Java program accessing data through JDBC.

Air Gap Replication

Rob Beers, November 08, 2007 - 10:18 am UTC

If 2 databases can't be connected physically due to security reasons (government networks) and an air gap is the only choice and the data must be in text format to check for dirty words, do you still recommend replication via Streams?

Tom Kyte
November 09, 2007 - 11:49 am UTC

no, there is NO WAY to replicate.

You'll be building something of your own unique construction.

You might be able to use streams or CDC (change data capture) as a building block, but you'll be writing lots and lots of custom software.

streams vs snapshots?

Anoop, June 25, 2008 - 4:38 pm UTC

Hi Tom,

your answers have been very helpful.

I am doing research to figure out the advantages/disadvantages of streams vs snapshots for syncing a remote database to the main database. Could you please reflect on issues like performance, setup, robustness, impact to current code etc.? I know it is kind of general, but I have not been very successful in comparing these two technologies on the above mentioned criteria.

Any pointers/advice is welcome.
Tom Kyte
June 25, 2008 - 5:08 pm UTC

materialized views:

o easy to set up
o materialized view log has relatively low overhead (internal trigger)
o easy to maintain
o source system will be involved in the refresh - does impact resource usage on source
o been around for a long long time (since version 7.0)

streams:

o more involved setup
o can capture downstream, 0% effect on source system is possible.
o more to maintain
o relatively new, but in its 3rd and 4th generations with 10gr2 and 11gr1



replicating from third party database

puredba, June 26, 2008 - 9:29 am UTC

Dear Tom,

You have said that replication would impact the performance of the source database. So what method should I apply if I have a third-party database like Sybase or SQL Server (we have only read-only rights on that database as it belongs to some third party) and we want to replicate a few tables on a daily basis? However, I need to maintain only a subset of the data on the target database, Oracle 10g Release 2, for a few tables, whereas for others I would maintain full tables. Please suggest the process for best performance, ease of implementation and least maintenance, and where I would get more implementation tutorials about them. You may also like to discuss the pros and cons of all such alternatives here for all of us.

Please suggest in details.

Thanks in advance.
Tom Kyte
June 26, 2008 - 4:17 pm UTC

both materialized views and streams are replication. So, no, I didn't say "replication would impact...". It depends.


I'm not the right person to talk to regarding heterogeneous replication. You won't be using anything "out of the box" from Oracle to replicate from them.

Re: streams vs snapshots?

Anoop, June 27, 2008 - 2:43 pm UTC

Hi Tom,

Had a follow up question on your response. The requirements for our application might be to support near-real-time data. In that case, we might need to refresh Snapshots very frequently (like every 15 mins or so). That is not a hard requirement yet, but a possibility.

I wanted to check with you regarding a comparison between refreshing a snapshot (often) and using streams. The table might be as large as a million+ rows. Is there any benchmarking example to compare the performance of streams vs snapshots?

Thanks in advance.
Tom Kyte
June 27, 2008 - 3:21 pm UTC

If one approach was always superior - we would not have more than one approach.

the answer: it depends

it depends on the nature of your modifications, totally.

If performance and scalability are important to you (and they should be) you would benchmark under YOUR conditions.

The term "your mileage may vary" comes to mind, the only answer is "it totally and utterly and completely depends"

Advanced Replication, Oracle Streams, Materialized View or Something else ....

BC, October 07, 2009 - 4:23 pm UTC


Tom,

We have a view "a_view" on a 10g instance "a_db" that uses table a, b, c from "a_db" and table "d" from "b_db" ( 10g instance as well ) using a database link. This view is being used extensively all over the place and hence we would like duplicate table d from "b_db" to "a_db".

One of the options we considered is creating a materialized view "a_db", However, we would have to either refresh this mv manually or set it to run at an interval.

This table "d", gets updated frequently during the day, mostly inserts < 10,000 rows per day. but when data is inserted it needs to be available in the view "quickly" as daily business decisions depend on it.

What would you recommend ? Advanced Replication, Oracle Streams, Materialized View or Something else ....

Thanks

BC

Impact of Supplemental log

Santosh Vijayan, December 22, 2009 - 10:18 pm UTC

Greetings Tom,

I have a 9.2.0.6.0 database.
I want to use supplemental logs on tables for changes to be logged. An example is shown below.

ALTER TABLE "MM_OWNER"."PROPERTY_INVENTORY" ADD SUPPLEMENTAL LOG GROUP "GGS_PROPERTY_INVENT_31651" ("PROPERTY_NUMBER","ROOM_CODE","INVENTORY_DATE") ALWAYS;

I would like to know what would normally be the performance impact on a database if supplemental logging is enabled for all the tables in a schema.


Thanks

Tom

Tom Kyte
December 31, 2009 - 11:21 am UTC

somewhere between 0% and 1,000,000% typically.

As in, it depends.

It depends on update frequency
It depends on the reserve capacity of your redo devices
It depends on how much free cpu you might have

it might increase the amount of redo generated - that might not affect you at all, it might affect you a lot - it depends on what is happening right here, right now.

This is what I suggest

a) understand what it does (it ensures each update contains enough information to logically identify each row that is modified by the statement)

b) understand what your system currently does

c) hypothesize whether the additional workload in the respective areas will be a negative thing for you based on A and B
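Following the suggestion in (b), a minimal sketch for measuring the effect yourself (run it on a test copy of the schema, once before and once after adding the supplemental log groups, around the same representative DML workload):

select s.value as redo_bytes
  from v$mystat s, v$statname n
 where n.statistic# = s.statistic#
   and n.name = 'redo size';

Capture the 'redo size' value for your session before and after the workload in each configuration; the difference between the two runs is the extra redo the supplemental logging costs you.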


Stream Replication

A reader, January 21, 2010 - 7:29 am UTC

Hi Tom,
Can we run Streams replication on a database in NOARCHIVELOG mode?



Tom Kyte
January 21, 2010 - 10:34 am UTC

no

it would never ever make sense to replicate a noarchivelog mode database. The reason: you cannot fully recover from it, you KNOW that someday you WILL (not might) lose all changes from some point in time (your last backup). Therefore, anyone replicating it would have to erase themselves (to forget about the data that never happened, that you just lost) and recreate themselves.

Besides, data in a noarchivelog mode database is data you don't really care about - that is obvious, you've said so by putting it into noarchivelog mode. Since that data isn't useful, there would never be any need to replicate it (why have two copies of useless, not important data?)

Data Synchronization

A reader, February 24, 2010 - 11:53 am UTC

Tom,

I have a question about database design changes in our app.
Current design: A person can work in multiple states.
___________________

Person_id - primary key
ssn_nbr - not unique
state_cd

new design

person_id - primary key
ssn_nbr - unique

child table of person (person_state)
person_id
state_cd

We were able to migrate the data easily to the new tables; however, we must support apps that use the old design or the new design (in parallel). What would be the best solution for this problem? Should I be looking into traditional views, MVs, Oracle streams or something else? We are on 10gR2.

Thanks a bunch in advance for your advice.
Tom Kyte
March 01, 2010 - 9:07 am UTC

sort of insufficient data here to comment, you don't give the use cases needed to be covered. But in short, probably

if the new design has the new child table

you can create a view that appears to materialize the old table


if the old legacy application doesn't modify data, you are done, they are good to go.

If it does, you'd have to do quite a bit of design thinking - you MIGHT be able to use instead of triggers (I'd hate that) but you would have to draw out all of the use cases.
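A minimal sketch of such a view, using the column names from the question (this reproduces one row per person/state combination, just like the old design):

create or replace view person_old_format as
select p.person_id, p.ssn_nbr, ps.state_cd
  from person p, person_state ps
 where ps.person_id = p.person_id;

The legacy apps would query person_old_format instead of the old table; whether they can also write through it (directly or via instead-of triggers) is where the design thinking above comes in.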
