A reader, October 01, 2001 - 1:22 pm UTC
BUT...
A reader, October 01, 2001 - 1:25 pm UTC
but will replication refresh all the data from the master DB? I want to be able to keep the updated table and have the replication append the new data to the slave.
October 01, 2001 - 2:59 pm UTC
replication will only refresh the tables you want to have refreshed.
I asked if the tables at the copy site were "different" -- not the same as -- the replicated tables -- you said they were. Hence replication will work.
Let's say you have the sites MASTER and COPY.
Master has tables t1, t2, t3, t4, ..... tn.
Copy replicates tables t1, t2, t3, t4 and has its own tables ta, tb, tc. Replication will maintain t1 ... t4 as copies of the data from master; ta, tb, tc will be unaffected.
A typical scenario
Abu Ahmadh, September 17, 2002 - 2:19 am UTC
Hi Tom,
I have two sets A and B. Initially, set B is a copy of set A. Over time, set B gets its own inserts, updates and deletes. Now, I want to create a third set C which will comprise:
a) all rows from set A
b) all new rows from set B
and separate out conflicting rows (perhaps into a generated log), say,
i) rows in set A and set B are different (updated in set B)
ii) rows found in A but not in B (deleted in set B)
What is the best way to accomplish this?
Thanks Tom.
September 17, 2002 - 7:54 am UTC
best to have two columns added to B and maintained via a trigger
o date_created
o date_modified
when you copy A to B_Table (yes, B_table, not B), add these two columns, an insert trigger to maintain one, and an update trigger for the other.
then, create a view B as select .... from b_Table (selecting everything but date_created, date_modified). Use the view B as if it were a table.
Now to find what you need is a series of simple queries.
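For illustration, a minimal sketch of that setup might look like this (the table and column names here are invented for the example, not taken from the question):

create table b_table
as
select a.*, sysdate date_created, sysdate date_modified
  from a;

create or replace trigger b_table_maintain
before insert or update on b_table
for each row
begin
    if inserting then :new.date_created := sysdate; end if;
    if updating  then :new.date_modified := sysdate; end if;
end;
/

create or replace view b
as
select id, c1, c2    -- every column except date_created/date_modified
  from b_table;

-- then, assuming :copy_time is when B was copied from A:
-- new rows in B:       where date_created  > :copy_time
-- updated rows in B:   where date_modified > :copy_time
-- deleted from B:      select id from a MINUS select id from b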
sub-set
A reader, September 17, 2002 - 10:35 am UTC
Is there any way to get a sub-set of data from the master database to the replicated database?
E.g., the development instance has only a small set of data from the production database, but must still maintain integrity across the tables.
September 17, 2002 - 12:42 pm UTC
Does replication support different table structures?
A reader, September 18, 2002 - 11:11 pm UTC
Can the structure of a table be different at the replication site than at the master? Is there any way to replicate such tables? For example, table A at the master site has col1, col2, col3, col4 and table A at the replicated site has col1, col2, col5, col6.
September 19, 2002 - 7:43 am UTC
no
Can this be accomplished with Replication / MView
Abu Ahmadh, September 19, 2002 - 8:16 am UTC
Tom, with respect to my query above: yes, it can be done via triggers if it is a small number of tables. But we have a few hundred tables. Let's say we have two environments (DBs) that have the same set of configuration tables (a few hundred) initially. Over time, each DB gets its own updates, deletes and inserts.
What we want to do is replicate set A -> set B with some exceptions.
1) If it is a new row in 'A', then replicate to 'B'
2) If it is an update, then do not replicate. It should tell us that the row is different. (i.e. compare all columns in a given row to find out if the replicated row is updated)
3) If it is a deletion, then do not delete at the replicated site. Rather, it should tell us (may be by primary key) that the row is deleted in the master site ('A').
In simple terms, we need to track changes done at both sides. I hope you understand our requirement.
Would appreciate your solution.
September 19, 2002 - 8:21 am UTC
Sure, I understand that you'll need triggers on the tables in order to maintain these fields so you can see what's new and what's changed.
Otherwise, you cannot tell the difference between a "new" row at A vs. a "modified" row at A when you go to compare.
1 table, 100 tables, 1000 tables -- what is the difference? the trigger will effectively be the same for all -- so, automate the building of the trigger (eg: like we did with the "create snapshot log" command -- what does that do? writes a trigger for the table. create your own create snapshot log command)
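For example, a rough sketch of such an automation (assuming the date_created/date_modified columns from above have already been added to each table; names are illustrative, and watch the identifier length limit when deriving trigger names):

begin
    for t in ( select table_name
                 from user_tables
                where table_name like 'CFG%' )   -- pick your few hundred tables here
    loop
        execute immediate '
            create or replace trigger tmnt_' || t.table_name || '
            before insert or update on ' || t.table_name || '
            for each row
            begin
                if inserting then :new.date_created := sysdate; end if;
                :new.date_modified := sysdate;
            end;';
    end loop;
end;
/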
Tianhua Wu, September 19, 2002 - 10:07 am UTC
When you use replication, you can add logic to call dbms_reputil.replication_off in your application whenever you connect to the specific site. However, you must be careful: if you use other tools and forget to turn replication off, data will replicate to the other site. There are also other issues you need to take care of, for example, primary keys.
Can't be done!
A reader, September 19, 2002 - 10:24 pm UTC
Dear Tom,
You said, "1 table, 100 tables, 1000 tables -- what is the difference? the trigger will effectively be the same for all -- so, automate the building of the trigger (eg: like we did with the "create snapshot log" command -- what does that do? writes a trigger for the table. create your own create snapshot log command)"
i) Am I right to say that this cannot be done by using any of the existing features in Oracle without manually writing triggers?
ii) Or, can it be partly accomplished with the use of replication to minimize writing of triggers? If yes, how to do that and what type of replication to use?
Appreciate your reply. Thanks.
September 20, 2002 - 7:58 am UTC
i) you could use our snapshot log if you like. It is virtually the same concept; they just use another table. For every insert/update/delete, a log entry will be made into a snapshot (AKA materialized view) log table.
ii) see (i) -- however, that just *writes a trigger* for you (as I am suggesting you do)
Oracle Utility
Abu Ahmadh, September 24, 2002 - 8:05 am UTC
Hi Tom,
Thanks for sharing your knowledge so far. To achieve our requirement, we should be able to do a "column by column" comparison for a given table across two schemas to see if rows are different. My colleague says such a utility exists in SQL Server. Just wondering if Oracle has one, so that we don't reinvent the wheel.
Thanks again.
September 24, 2002 - 3:26 pm UTC
See the DBMS_RECTIFIER_DIFF package found in the supplied packages guide.
Snapshots without delete
Harri, November 26, 2002 - 6:23 pm UTC
Hi Tom!
I have almost the same situation as the original questioner here.
I must replicate a few tables from the master DB to an archive DB
so that only new and updated rows are replicated.
When the production DB is cleaned (old rows deleted), the archived rows in the archive DB must stay.
The refresh is needed only once a day.
One important thing is that all replicated tables must be consistent -- the refresh must be done in one transaction.
I think a 'refresh group' is the solution for that.
Is this possible to do with snapshots (MVs)? Or is there a better way?
-Harri
...
harri, November 26, 2002 - 7:53 pm UTC
Thanks for the very quick response! It appears especially amazing to me because the time here is 2 AM!
What do you say about triggers in this case?
If I use triggers to insert and update rows into another table in the master DB, and then do something like this in the archive DB:
insert into arch_table
select * from trigger_dest_table@dblink
where last_modified > (select max(last_modified) from arch_table);
November 26, 2002 - 9:30 pm UTC
You could use a snapshot log to capture the inserts/updates and use the mlog tables "manually" to do your own refresh by primary key.
That might be a bit less of a burden on the source system.
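A sketch of what that "manual" refresh could look like, assuming a master table T with primary key ID and a log created via "create materialized view log on t" (the MLOG$_T name and DMLTYPE$$ column are the standard snapshot log artifacts, but verify them on your version; the archive table, column and dblink names are made up, and the MERGE requires 9i):

-- pull over rows whose primary key appears in the log as an insert
-- or update, and simply never propagate the deletes
merge into arch_t a
using ( select t.*
          from t@prod_link t
         where t.id in ( select id
                           from mlog$_t@prod_link
                          where dmltype$$ in ('I','U') ) ) s
on ( a.id = s.id )
when matched then update set a.x = s.x
when not matched then insert ( id, x ) values ( s.id, s.x );

-- and afterwards purge the log entries you have consumed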
Update data from 2 separate databases
Zo A., April 17, 2003 - 5:41 am UTC
Hi Tom,
Your response to the question above was great, thanks a lot. Now, we have Oracle 8.0.5 on a Unix server at site 8A, and Personal Oracle 7.3 on a standalone PC at site 7B. They are using the same application (same tables ...) and have their own data.
1) I need to update tables A, B, C from 8A to 7B each time a new record is updated or inserted in 8A.
2) 7B needs to update tables X, Y, Z in 8A without overwriting the records which were created in 8A.
I'm thinking of extracting the data into an ASCII file and building a CTL (control) file so that I can load the data with SQL*Loader, exchanging those files via MS Outlook. What do you think? How should I achieve this?
Thanks
Zo
April 17, 2003 - 10:39 am UTC
sounds a whole lot like an updatable snapshot -- read the replication guide and give some SERIOUS consideration to using software that is supported and supportable, given you are still IN DEVELOPMENT.
Further clarification
Rob, April 17, 2003 - 8:39 pm UTC
Tom:
When you say:
>MV's and all forms of Oracle replication prior to Oracle9i >Release 2 will not do what you ask.
>They will ALWAYS propagate the delete, no way to avoid it.
Does this mean that if the base table (the table with the MV log defined on it) is truncated, there is no way to avoid having the truncate passed on to the MV that is refreshed from that MV log?
We have a situation where a week's worth of records are processed in a work table; then we want to refresh the MV so that it contains all of the work table's rows (most of which are new, but a few of which are updates), then truncate the work table and start processing next week's rows in the work table. Obviously we don't want to truncate the MV.
Does it sound like streams are what I should be looking at? We are on 9.2.
April 18, 2003 - 10:23 am UTC
streams, yes.
replication, no.
replication is by its very definition "mirror image". It would be an error for the replica NOT to look exactly like the primary.
streams allows you to capture "logical change records" and apply what you want from them (eg: apply inserts and updates, ignore all others)
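As a sketch of that last point: with Streams you could register a DML handler for DELETE operations that simply discards them, so inserts and updates are applied and deletes are ignored. The schema and object names below are made up; see the Streams documentation for the full capture/propagate/apply setup:

create or replace procedure ignore_deletes( in_any in anydata )
as
begin
    null;   -- intentionally discard the delete LCR
end;
/

begin
    dbms_apply_adm.set_dml_handler(
        object_name    => 'SCOTT.WORK_TABLE',
        object_type    => 'TABLE',
        operation_name => 'DELETE',
        error_handler  => false,
        user_procedure => 'STRMADMIN.IGNORE_DELETES' );
end;
/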
Best way to replicate a schema
Tatiane, July 13, 2003 - 10:38 am UTC
Tom,
I am working on a project where I have to design how to update a "mirror" site for read-only access. It seems a simple configuration to me. I would like to have your opinion on it. Maybe I am missing something. The environment looks like this:
o We have a production database with Oracle 8.1.7 and HP-UX
o Our customer has the same version of Oracle and they have Solaris.
o We are connected through a WAN link (our bottleneck is something like 50 Mbps). We are in different cities, so this is neither a reliable nor a fast link.
o The customer wants only one schema updated, to which they would like read-only access to build reports using their local mirrored database.
o So, we would not have to replicate all the production database, only that specific schema.
o Staleness is tolerated. They want the mirrored site to be updated on a daily basis, like during the night.
o Since the production tables are large, we would like to send them only the updated data.
o The updated data is relatively small.
My solutions:
o I think I would have to do a nightly batch update on that site.
o Standby databases and applying redo logs do not seem to fit here, since I want only one schema from a huge production database -- for querying purposes.
o I am new to Oracle replication, but from what I have read, updating an entire schema online via a slow and unreliable link is not a nice solution. Or am I wrong here and Oracle replication would be a fine solution ???
o What I have thought of as the best answer would be a solution that would generate incremental exports from that schema and do a nightly batch update on the mirror site.
o I would write a set of shell scripts to transport, test, re-transmit, log and notify the operations. But my feeling is that this control would be a bit cumbersome (do not reinvent the wheel, Tatiane!), and maybe something like Oracle replication has built-in features to deal with that. What do you suggest here ???
o I have heard incremental exports are deprecated, right ???
What would you recommend in such a situation ?
Thanks again, Tom !
July 13, 2003 - 1:34 pm UTC
This sounds like read-only snapshots -- where each individual snapshot would be refreshed independently (reducing the risk of being almost complete in refreshing 100 tables and then failing due to a network error).
incremental exports export the ENTIRE table if a single bit changed. they do not export just changed rows.
create snapshot logs on production, read-only snapshots on the copy site (sans RI -- you can have indexes and such, but no foreign keys, so you can refresh each snapshot one by one)
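Something along these lines on each site, per table (the dblink name and refresh schedule are placeholders):

-- on production:
create snapshot log on t;

-- on the copy site (indexes are fine here, foreign keys are not):
create snapshot t
refresh fast
start with sysdate next trunc(sysdate)+1+2/24   -- nightly at 02:00
as
select * from t@prod_link;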
Doubt on terms
Gillian, July 14, 2003 - 8:57 am UTC
Sorry, what do:
1) Sans
2) RI
mean above ?
Isn't "sans" a french word ?
July 14, 2003 - 9:26 am UTC
sans means "without"
ri is referential integrity.
Sans RI
Mike, July 14, 2003 - 9:06 am UTC
"Sans" means "without"
"RI" means "Referential Integrity" (via foreign key constraints, etc)
RI
A reader, September 18, 2003 - 8:44 am UTC
Hi Tom,
Can we have RI defined on the copy tables if we take care to refresh in the right order? Or, if RI is defined, can we disable it and re-enable it afterwards? Can we have different RI defined on the copy tables than the ones on the originals?
I was also kind of wondering if we can use the REFRESH ON COMMIT clause to synchronously replicate? I know it's a bad idea, but is it possible?
Thanks
September 18, 2003 - 10:44 am UTC
put the RI on with "deferrable initially immediate"
you can do this with deferrable constraints.
refresh on commit works in a single database, not over databases.
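To illustrate the deferrable-RI suggestion, a sketch with made-up names:

alter table emp_copy
add constraint emp_copy_fk_dept
foreign key (deptno) references dept_copy(deptno)
deferrable initially immediate;

-- in the session doing the refresh:
set constraints all deferred;
-- ... refresh the parent and child snapshots here, in any order ...
commit;   -- the RI is checked only now, at commit time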
Thanks
A reader, September 18, 2003 - 11:56 pm UTC
Dear Tom,
I was reading about establishing referential integrity over multiple databases in Jonathan's book. It seems the only possibility is either to seriously compromise concurrency or to leave a small window where RI can be violated.
What's the best way to establish RI over multiple databases? There cannot be easy solutions, as this has to work over a network, but do we have something in the newer versions -- Oracle 9.2 or 10g?
Thanks
September 20, 2003 - 4:55 pm UTC
over my dead body would they give us this dubious "feature".
I would state categorically that you want one database for these tables -- not two. All you would achieve with two is:
o infinitely decreased application availability. The failure of either database impacts the other database 100%.
o infinitely increased runtimes. it would be what is known as "slow"
o infinitely increased resource utilization.
this screams "single database".
loosely coupled systems can live in a distributed environment.
tightly coupled -- never, not in a million years.
forget about concurrency and data integrity for a minute, there are much more serious things to worry about.
Thanks
A reader, September 22, 2003 - 12:14 am UTC
Dear Tom,
Thanks for your very clear directions. The requirement we have is to integrate a legacy Oracle application with an additional application. The newer application would be on a newer box (the older one cannot take the additional load). The thing to go for is a single database on a shared disk array, letting the old and the new machines operate RAC-style and share the database. This seemed like a radical change to the environment, so an old-style solution was being considered. Looks like we are clear on this now.
tables in synch
mo, June 12, 2004 - 1:40 pm UTC
Tom:
Can you point me in the right direction to accomplish this: I have two different databases A and B. I need to have a copy of two tables from database A in database B. If a user inserts/deletes/updates a record in these two tables in database A, then the same change should also occur in the same table in database B. I'd like to accomplish this without using triggers and database connections.
1. Do I use replication/snapshots/??? here. Do I just need to read the docs at the above link?
2. Is it difficult to implement by a developer? Do I need a DBA to set it up?
3. Do I need special tools for it (i.e., Oracle Replication Manager) or can I use SQL*Plus?
Thank you,
June 12, 2004 - 1:45 pm UTC
if you rule out triggers and dblinks, you have only one choice.
streams (9ir2)
well, two choices
streams
do it yourself using magic (sneaker net)
streams is documented -- see the list of docs for 9ir2 or 10g if you are using that version.
Otherwise, you will want to rethink the "no triggers", "no dblinks". The easiest way to do this by far will be:
a) create materialized view log on TABLE_IN_A (on database a)
b) create materialized view table_on_b as select * from table_in_a@dblink (on database b)
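A minimal sketch of (a) and (b), assuming TABLE_IN_A has a primary key (the dblink name and refresh interval are placeholders):

-- on database A:
create materialized view log on table_in_a;

-- on database B:
create materialized view table_on_b
refresh fast
start with sysdate next sysdate + 1/24   -- refresh every hour
as
select * from table_in_a@dblink;

-- or refresh it on demand whenever you like:
exec dbms_mview.refresh( 'TABLE_ON_B', 'F' );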
materialized view
mo, June 12, 2004 - 11:30 pm UTC
Tom:
Do you cover the use of materialized view logs and materialized view tables anywhere in your book for doing this? I am trying to find a small example of how these are created.
Use of database links should be OK.
Thank you,
June 13, 2004 - 10:49 am UTC
materialized view
mo, June 14, 2004 - 4:22 pm UTC
I tried to test this with EMP table.
I ran this in the first instance but got an error.
SQL> create materialized view log on emp
with rowid,primary key(empno)
including new values;
create materialized view log on emp
*
ERROR at line 1:
ORA-12026: invalid filter column detected
Then I did:
SQL> create materialized view log on emp;
Materialized view log created.
In instance 2 I ran this and I got this error:
create materialized view emp_view
build immediate
refresh fast
on commit
as
select * from emp@db_link
SQL> /
select * from emp@pcs_link
*
ERROR at line 6:
ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view
1. Do you know what is causing these errors?
2. Would this also work both ways, i.e., if you update the view, will it update the table?
June 15, 2004 - 8:13 am UTC
you cannot refresh on commit over a DBLINK
that would be synchronous replication -- which is supported -- but done differently. See the advanced replication guide.
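(The first error, ORA-12026, is likely because empno -- the primary key -- was listed as a filter column; the primary key columns are logged implicitly and may not appear in the filter column list.) A version of the materialized view that should work over the dblink, refreshing on demand instead of on commit, would look something like:

create materialized view emp_view
build immediate
refresh fast on demand
as
select * from emp@db_link;

exec dbms_mview.refresh( 'EMP_VIEW', 'F' );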
Jon, July 01, 2004 - 5:30 am UTC
Excellent responses, thanks Tom and all reviewers. I think snapshot logs will provide the solution to my problem.
However, I have a question about versions. I am developing on a 9.2 system. The customer is currently running 8.1.7, but will soon be upgrading to 9.2 or 10g. Are there any issues I should be aware of between the different versions, both now and if the customer goes to 10g?
Thanks,
July 01, 2004 - 10:53 am UTC
they are interoperable.
How about data guard?
A reader, August 21, 2004 - 10:21 am UTC
.. I have two different databases A and B. I need to have a copy of two tables from database A in database B. If a user inserted/deleted/updated a record in these two tables
in database A then it will also occur at the same table in database B. ...
Tom,
I saw this in a forum:
http://forums.oracle.com/forums/thread.jsp?forum=61&thread=260711&tstart=30&trange=15
>I was going through this link for applying the changes on the primary database to the standby database:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96653/log_transport.htm
>Would that be OK for recording the changes and having the two databases in sync?
---
Can we use this approach too? ... Your thoughts on this will be very useful. Please provide a link if you have already shown us an example of this ...
And do you see any limitations? ... Does it complicate our approach?
Thanks a lot for your consideration
Pravesh karthik
August 21, 2004 - 12:13 pm UTC
you could use a logical standby and just have these two tables "standbyed"
or you can use replication (advanced or basic)
or you can use streams (upon which the logical standby is itself built in a fashion)
A reader, August 22, 2004 - 1:46 am UTC
Tom,
>> you could use a logical standby and just have these ..
so this feature is only in 9iR2, right?
I have 9iR1 -- so I have only the replication options ... please confirm
Thanks alot
August 22, 2004 - 8:18 am UTC
9ir1 -> materialized view or master to master (peer to peer) replication.
Replication of a single master table at two sites.
SHV, September 21, 2004 - 2:30 am UTC
Hi Tom,
I have three Oracle 9.2 databases at different locations, say A,B and C.
My master tables are at database location A. Say I have master tables emp and empdetails; I create a materialized view log on each.
My replicated sites are databases B and C. I create the materialized views at these sites and create refresh groups at both sites for these tables. Do you see any potential problems/issues in replicating tables emp and empdetails at both sites simultaneously? This is because I feel sometimes the replication does not happen properly and is not consistent.
Regards
September 21, 2004 - 7:41 am UTC
<quote>
This is because I feel sometimes the replication
does not happen properly and is not consistent.
</quote>
do you have ANYTHING WHATSOEVER to back up that "feeling"?
Feelings do not count, only facts.
If you have refresh groups
and you refresh refresh groups
then, the refresh groups will be consistent
and it'll happen "properly"
right track?
Susan, January 07, 2005 - 12:38 pm UTC
We're on 8.1.7.4. I'm reading through the replication documentation now, but could you clarify whether replication is the right approach for the following situation? T1 is in schema XYZ on instance A. Multiple schemas on instance A and instance B also contain a T1. Schema XYZ contains data that should be in all versions of T1 (regardless of schema or instance). Periodically, XYZ.T1 gets modified (for now let's just worry about DML modifications); those modifications need to be replicated to all T1s. However, we want to allow custom entries and modifications to the data in the non-XYZ schemas as well. We do not want the custom modifications propagated back to XYZ. I was thinking of XYZ as a master site propagating snapshots out to all the tables on instances A & B. Right approach? Thanks
January 08, 2005 - 3:46 pm UTC
I would be looking to fix the "multiple T1's" in instance A. There is very simply no reason -- technical or business -- that can exist to make me need multiple T1's.
So I would have a SINGLE t1 table -- each schema may well have a very different view of what it sees in there, and a different set of rows it is allowed to modify:
http://asktom.oracle.com/~tkyte/article2/index.html
snapshots will not get you anywhere in this case; the snapshots could not have "extra" data (more rows in them). The snapshots would not make sense on site A at all.
Can you get yourself to a single table here?
And why two instances -- what are the update anywhere issues you are going to hit and how do you plan on dealing with them?
dbms_rectifier & global name
Vlado, January 22, 2005 - 1:53 pm UTC
Trying to use dbms_rectifier to compare two tables but I get an ORA-23365 even though
dbavlado@DWDEV.WORLD> select * from global_name@httst;
GLOBAL_NAME
--------------------------------------------------------------------
HTTST.WORLD
dbavlado@DWDEV.WORLD> BEGIN
2 SYS.DBMS_RECTIFIER_DIFF.DIFFERENCES(
3 sname1=> 'DBAVLADO'
4 , oname1=> 'BIG_TABLE'
5 , reference_site => NULL
6 , sname2 => 'DBAVLADO'
7 , oname2 => 'BIG_TABLE'
8 , comparison_site => 'HTTST.WORLD'
9 , where_clause => NULL
10 , column_list => NULL
11 , missing_rows_sname => 'DBAVLADO'
12 , missing_rows_oname1 => 'MISSING_ROWS_DATA'
13 , missing_rows_oname2 => 'MISSING_ROWS_LOCATION'
14 , missing_rows_site => NULL
15 , max_missing => 100000
16 , commit_rows => 100000);
17 END;
18 /
BEGIN
*
ERROR at line 1:
ORA-23365: site HTTST.WORLD does not exist
ORA-06512: at "SYS.DBMS_RECTIFIER_DIFF", line 1628
ORA-06512: at "SYS.DBMS_RECTIFIER_DIFF", line 1219
ORA-06512: at line 2
What am I doing wrong?
January 22, 2005 - 4:30 pm UTC
are you using replication?
dbms_rectifier & global name without replication
Vlado, January 22, 2005 - 7:18 pm UTC
I'm not using Oracle's replication feature. Does that mean I can't use dbms_rectifier to compare two tables in two different databases?
January 23, 2005 - 10:01 am UTC
<quote>
The DBMS_RECTIFIER_DIFF package provides an interface used to detect and
resolve data inconsistencies between two replicated sites.
</quote>
in fact, the supplied packages guide says "to find out how to use this -- read the replication guide"
https://www.oracle.com/technetwork/issue-archive/2005/05-jan/o15asktom-084959.html
shows how to compare any two tables using SQL
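The technique in that article boils down to a single query of this shape (the columns here are illustrative): tag each table's rows with a source marker, union them, and group by every column -- rows occurring the same number of times in both tables drop out, and only the differences survive:

select id, c1, c2,
       count(src1) cnt1,
       count(src2) cnt2
  from ( select t1.id, t1.c1, t1.c2, 1 src1, to_number(null) src2
           from t1
         union all
         select t2.id, t2.c1, t2.c2, to_number(null) src1, 2 src2
           from t2 )
 group by id, c1, c2
having count(src1) <> count(src2);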
Streams, LDG, change data capture or updatable mviews
Gabriel, May 05, 2005 - 3:14 pm UTC
Hello Tom,
I have two databases A and B. A is 23/7 and B is 1/7, i.e. A is up 23 hours per day and B is up when A is down. B consists of 9 tables that exist in A. We have to replicate the changes in these tables from one site to the other as they become available. The goal is to minimize maintenance of the solution; it doesn't really matter what the initial setup cost is.
What replication or pseudo replication solution would you choose from:
Streams, LDG, change data capture, updatable mviews, mlogs + PL/SQL, or something else?
Thank you very much,
May 05, 2005 - 5:58 pm UTC
can B be up 24/7? can we use it the other 23 hours?
and what is the goal here -- why 23/7 and 1/7? what are you doing that puts you into this situation?
Followup
Gabriel, May 09, 2005 - 8:32 am UTC
Hello Tom,
B can be up 24/7. The 23/7 is because the application leaks memory; we have to restart the server almost every day. During that 1 hour we have to point a critical module of the application towards these ~9 tables. At the end of the 1-hour outage of A, all changes from B must be pushed (or pulled) to A.
Thank you
May 09, 2005 - 9:13 am UTC
you would implement complex replication rather than fix a memory leak?
Project Management Decision
Gabriel, May 09, 2005 - 9:34 am UTC
Hello Tom,
I know it seems weird, but the application team has been trying for the last 2 years to fix this memory leak and here we are, implementing yet another workaround. The decision is taken by the project management team and the DBAs don't have much say in that. I guess a cost estimate was done and the decision was that this workaround would be cheaper than the fix.
Thank you,
May 09, 2005 - 9:50 am UTC
Ok, why do you have to shutdown a database to fix a memory leak in an application though?
Server shutdown
Gabriel, May 09, 2005 - 10:43 am UTC
Hello,
They have to reboot the server.
Thank you,
May 09, 2005 - 10:56 am UTC
why not just stop the application?
rebooting the server to free up memory?
is this a windows OS memory leak -- or a leak in the application? I cannot imagine rebooting a server to fix an application memory leak?
UNIX server
Gabriel, May 09, 2005 - 11:38 am UTC
Hello Tom,
This is a UNIX server. The DBAs are not always informed of the nitty-gritty of the application. All we were told is that we have to find a solution to keep these 9 tables available 24/7 even though the main DB server is shut down every night for the memory-leak workaround. This is a very complex application using ~40 third-party software packages, some of which are no longer supported, etc. I don't know exactly which one of the 40 creates the problem, or if it is the code written by the application team. All I know is that, inevitably, the server will go down for 1 hour almost every day. During this time I have to allow the other app module to access these 9 tables. The decision was taken on a cost basis and not a technical one. In fact, no DBAs were consulted during the decision process.
May 09, 2005 - 11:46 am UTC
implementing update anywhere replication (which is what you are asking for) without downtime is not possible.
You would have a period of time where the same row(s) could be updated on either machine (and how do you get this application pointed to the other machine? where does this application "live"???). You CANNOT reconcile this in a 3rd party application.
You have sequence issues and all kinds of stuff. Replication cannot be logically considered.
This is not a workable solution.
Where does the application "live".
15 minutes allowed downtime
Gabriel, May 09, 2005 - 1:24 pm UTC
Hello Tom,
I realized I forgot to mention to you that 15 minutes at the beggining and edn of the 23 hour are acceptable downtime of the application. Example:
22:45 application goes down on A (where it lives most of the time)
15 minutes time allowed to push/pull the latest changes to B.
23:00 application goes live on B
23:45 application goes down on B
15 minutes time allowed to push/pull the latest changes to A
24:00 application goes live on A again.
Thank you,
May 09, 2005 - 2:45 pm UTC
let me see -- it can be down for 30 minutes, but not 60?
and since A must be booted within 45 minutes (at 23:45 B can push to A ... so A must be up)
I'm confused -- all of this for 15 minutes? Really? Sorry, this just isn't worth the bytes is it?
(can you tell I think this is a really bad idea? replicating an entire database for 15 minutes of shaky availability, the increased workload you add, IF it even works (adding triggers to unsupported 3rd party code) -- this is going in the wrong direction)
upgrade in replication
abc, May 12, 2005 - 6:51 pm UTC
Hi Tom,
I have 2 databases: 1. Adev 2. Wdev
Adev is the master.
Now I need to upgrade these two to 9.2.0.5 from 8.1.7.4.
Please suggest: shall I first upgrade Adev, then Wdev?
How do I stop replication?
I am confused. Please advise.
May 13, 2005 - 9:04 am UTC
read this document from start to finish.
https://docs.oracle.com#index-MIG
test it multiple times
and consider that 10g will be supported for 2 years longer than 9i and has been out and about for 1 1/2 years...
Integrate a legacy Oracle application with an additional application
Ricardo Patrocínio, September 29, 2005 - 1:43 pm UTC
Hello Tom,
We have to integrate a legacy Oracle application with an additional application. The newer application would be on a newer box (9.0.1.4.0).
The legacy application is on DB 7.3 because it was developed in Forms 3.
How can I ensure the referential integrity of the data in the new box (9.0.1) against the data in the 7.3 database?
Thank you in advance,
Ricardo
September 30, 2005 - 8:35 am UTC
you cannot. there is no such thing as "distributed RI"
wow, integrating the entirely unsupported with the simply "not supported" and doing new development?
keeping 2 tables in sync
Shiju, October 26, 2005 - 4:55 am UTC
Tom,
I have a table T in two databases (say DB_A and DB_B). I can insert/delete/update T from both databases and I want to keep T in sync between DB_A and DB_B.
I.e., if I insert into T on DB_A, that data should be in DB_B also, and if I insert into T on DB_B, that data should be in DB_A also. The same goes for update/delete.
What is the best way to achieve this?
Thanks
Shiju
October 26, 2005 - 12:02 pm UTC
best way in my opinion is to combine databases A and B and not do replication: you have to really design to replicate, you'll have update conflicts, you'll have extra stuff to administer.
Your choices are:
a) don't do this (my choice)
b) advanced replication
c) streams
streams is the stated direction of Oracle going forward, you might want to investigate that first.
http://www.oracle.com/pls/db102/portal.portal_db?selected=3#index-STR
Is replication the right answer?
RP, December 12, 2005 - 1:00 pm UTC
Hi Tom,
my new project has a strange requirement. It's a global customer application, but German data needs to remain in Germany.
I was thinking (and would like to know if it's possible) of having 2 databases (one in Germany, one elsewhere) and using replication to keep them in sync. Replication would NOT be used for the customer detail tables.
In the Java layer, a decision is made as to which DB to use (probably via which account is using the app), and requests are routed appropriately.
Is this the best solution? Is Oracle replication easy to set up in a scenario like this?
Thanks
RP
December 12, 2005 - 2:15 pm UTC
my first question is "why". As someone that has worked on global consolidation - I know that there are no legal requirements for this.
So, question is "why" and who is going to pay for this (hopefully the person that remains sure this is a true requirement).......
The best solution is of course, one database. I know for a fact it can be done, we've done it, we do it.
More info
RP, December 12, 2005 - 5:10 pm UTC
Hi Tom,
ok, ok, I lied. The country is not Germany, it's Switzerland, and the application is a banking one. Perhaps then you might understand the strange legal requirements.
Can you comment on the likely technical solutions?
Thanks
RP
December 12, 2005 - 5:29 pm UTC
we have offices in Switzerland, we took their data center away too and moved it to the US.
I'd just put the entire thing in Switzerland if they believe they need to own their data.
Replication is going to necessarily complicate things -- update anywhere -- ugh, it'll really make things complex. I would think 5000 times before going down the path of replication.
voice of reason missing from the docs
reader, May 03, 2006 - 5:01 pm UTC
"The database does not permit declarative referential integrity constraints to be defined across nodes of a distributed system. In other words, a declarative referential integrity constraint on one table cannot specify a foreign key that references a primary or unique key of a remote table. Nevertheless, you can maintain parent/child table relationships across nodes using triggers."
If it weren't for that word "nevertheless" in the 10.2 docs, then maybe people wouldn't take implementing RI via triggers as a worthy challenge to undertake.
The Best
Hawk, May 05, 2006 - 2:42 pm UTC
Tom,
As always, your answers are very helpful. It's a lot of fun, and educational, to read AskTom threads.
We have 2 databases which do not have similar data structures,
but we have to still refresh them every second.
Both databases reside at different geographic locations.
scenario
DB-1: one wide large table with approx 495 columns
(I know this is against rules but there is nothing much I can do about it)
DB-2: 14-15 different tables (with 15-20 columns in each table)
My requirement is that whenever a change is made to DB-2, DB-1 should be synchronized
A program currently does this 24/7 using a temp table, but it gets very slow at times and also breaks very easily.
On some days there are hundreds of thousands of rows that need to be synchronized. There are some procedures (for decodes and so on) required to run when synchronizing.
I want to re-architect/re-write this program, hence I wanted your advice and input on this.
May 05, 2006 - 3:04 pm UTC
The way to fix this:
apparently you all really meant to have ONE SINGLE DATABASE.
this screams consolidation - there is no good solution for what you describe. It'll always be fragile, it'll always be a mess, it'll always be slow, it'll always be a pain.
2 DB to keep in-synch but not Master-Slave
Jean, November 01, 2006 - 6:05 pm UTC
I have 3 DBs (Oracle 9.0.2.7); DEV, TEST, PROD. The Application (App) system is Win32 against an Oracle DB for data and a File Server for several standard documents that are developed and retained as part of a Job. We develop and unit test the App and DB changes (SQL scripts) on DEV; then do installation (SQL scripts, etc.), integration and acceptance testing on TEST. Once everything is tested, install on PROD for about 50 users.
Now, because of the complexity of an upgrade to the DB and App, the customer wants to add, after normal acceptance testing, Operational Testing (OT), meaning: maintain old App and PROD for most users; have new App and TEST for special users to start and work real Jobs, but keep the data in-synch so that the special users can work/complete a Job using old App and PROD if new App and TEST has issues. Jobs can take days to months to complete.
Each Job is keyed on max(Job_ID). PROD has 20 tables, with various triggers and stored procedures, supporting a Job using Job_ID as either the primary or part of a composite key. TEST will have 40 tables (PROD's 20 (modified, same or deleted) + new).
I need to get the correct Job_ID no matter which App/DB the next Job_ID is created in. I will also need PROD's new/updated data replicated to TEST (nightly, maybe). Slower performance during testing is acceptable. Mainly, I don't want any major new App recoding between OT and PROD installation. I also have to accomplish the merge of OT data on TEST as part of the installation on PROD, but there are several ways to move that data.
My brute force approach was to use db links with triggers/stored procedures for the modified tables and views for the unmodified tables. But I am really stuck on keeping Job_ID in-synch between TEST and PROD when a new Job_ID can be created on either one unless we make a major new App code change for OT.
Hopefully I have included enough information for you to provide suggestions for keeping TEST and PROD in-synch during OT. I am more than willing to learn more about using replication or streams or what ever, just need a path if my brute force method is way off course or there is something easier.
I am amazed when I read the chains of questions/answers at your ability to understand our scribbling so quickly.
Thanks so much for your help…
November 01, 2006 - 6:39 pm UTC
sorry, but this is too much to address in a review followup sense. this is a big "design" thing
keeping two different systems (different code, one assumes different logic, perhaps even different physical structures) "in sync" is taking things to a level of complexity beyond ...
sorry - this doesn't make sense to me to do, the amount of work involved in replication - let alone replication across disparate systems (they must be, else you would not be thinking of doing this!) hurts.
Thank you
Jean, November 02, 2006 - 10:11 am UTC
Thank you for your time and thoughts, which are actually helpful. I sometimes get stuck on one possible solution. Now I am thinking about other options and approaches that won't require as much "synching" or be as transparent to the user, such as a brand new app that will transfer a Job's data upon request.
Again, thanks for your time.
How to protect old data in case the remote table is sometimes empty
A reader, November 18, 2006 - 1:24 am UTC
Tom,
I am using materialized views to sync up the remote database tables in the local database.
Let's say, at the time of MV refresh, for some reason the remote database table was empty (very rare), or the MV refresh fails halfway with an error. Is it possible to protect the local MV's old data under these circumstances?
The only way I can think of is to create a backup copy of the current MV data in another table.
Do you have any other method?
Oracle 9.2.0.6
Is it possible to protect current data in the Materialized View when the remote table is empty?
A reader, November 21, 2006 - 5:15 pm UTC
Tom,
Can you reply to the above question?
November 22, 2006 - 4:09 pm UTC
a materialized view's only goal in life:
replicate the data that exists.
if the data does not exist, then that is what it will replicate.
think about it: why would an "empty table" be "something special" to a replication process?
A reader, November 22, 2006 - 4:25 pm UTC
<< think about it: why would an "empty table" be "something special" to a replication process?
Agree with you that MVs are perfect for synchronization.
The remote table cannot be empty; it's not possible per the requirements. Just being proactive, that's all.
November 24, 2006 - 2:43 pm UTC
why did you say:
<quote>
Let's say, at the time of MV refresh, for some reason the remote database table was
empty (very rare)
</quote>
then? if you say "remote table cannot be empty".
Keeping 2 tables in synch
A reader, February 14, 2007 - 7:02 pm UTC
Tom,
I need your help in creating a trigger that keeps 2 tables in synch. Each time there is an insert, update or delete on table1, the other table (table2) should get the same insert, update or delete. All columns in table1 should be kept in synch with the columns in table2. Can %rowtype be used for doing this? I would appreciate it if you could provide an example. The structure of both table1 and table2 is exactly the same.
Thank you
February 15, 2007 - 11:28 am UTC
here is how to do that:
drop table other_table;
create view other_table as select * from base_table;
do not do this any other way.
archiving DB
Alex, February 19, 2007 - 12:31 pm UTC
Dear Tom,
Like the reviewer Harri above,
I must replicate the tables from the master DB to an archive DB
so that only new and updated rows are replicated.
When the production DB is cleaned (old rows deleted), the archived rows in the archive DB must stay.
The master DB is 8.1 and the archive is 9.2. I propose to add a 9.2 "proxy DB" and replicate between the master DB and the proxy DB. For the replication between the proxy DB and the archive DB, I would like to use streams with rules.
What is your opinion ?
Thanks
February 19, 2007 - 2:21 pm UTC
you can use streams if the source database is 9.2 or above -- but since you have 8i, it is not going to be easy (eg: it'll be do-it-yourself)
that concept of a proxy database - I cannot tell you how bad that sounds.
archiving DB
Alex, February 20, 2007 - 3:20 am UTC
Dear Tom,
You mean the proxy DB approach doesn't work?
What are your concerns? Replication between 8i and 9.2?
The one reason for it is to avoid unnecessary programming.
Thank you a lot.
February 20, 2007 - 9:42 am UTC
I mean, the 'proxy db' just seems like a really overkill way to solve a problem. There is no way I'd want to introduce a full up copy of a production instance to do this. You have to manage it, back it up, configure it, care for it, feed it -- whatever.
Why not get onto supported software - then you have fewer moving pieces, you are supported, and you can achieve your goal.
Streams good solution?
Nika, March 22, 2007 - 4:20 pm UTC
Tom --
I am not sure if we should use Streams or not; I hope you can suggest something appropriate. Our client has a database that is just not designed for performance. The client is not willing to index, etc., this database. So we have basically been copying the data for many, many tables into our local database. We have basically been replicating the client's DB by means of materialized views and MERGE. At this point it has become apparent that we have to do it all the time. Does using Streams seem like a good solution here? Our DBA thinks that you can use Streams just for several tables, not to replicate the whole database. Is that true? If not Streams, then what? The data has to be real time -- if something gets updated, deleted or inserted on the client's database, it has to be captured on ours. Any suggestion would be greatly appreciated. Thank you!
March 22, 2007 - 4:41 pm UTC
you can use streams for 1 table, or for all tables.
streams will be more "real time" (nothing will be truly real time short of a two-phase commit) than a materialized view, yes.
Keeping 2 databases in synch
Bilal, March 27, 2007 - 6:30 pm UTC
Hi Tom,
I have 2 databases, source and target, where I need to keep the target updated (via DML actions from the source) asynchronously (around every 30 mins). However, the issue is that these databases are not able to connect to each other directly via SQL*Net (domain issues).
I was hoping to use Change Data Capture in asynchronous mode, copying (FTPing) the logs to the staging database to perform the apply there. Is this possible using CDC? What is the best way, if not CDC?
Thanks,
Bilal
March 28, 2007 - 11:28 am UTC
if you can ftp, the machines are obviously connected.
so, what is really going on here.
Keeping 2 databases in synch
Bilal, March 28, 2007 - 1:21 pm UTC
I did not clearly state the FTP problem. I actually cannot FTP between the source and stage databases. There is a "guard" between the source and stage databases. Once the data passes through the guard, I can then retrieve it and send it to the stage database.
Sorry for not making that clear.
March 30, 2007 - 11:29 am UTC
there are no automated "sneaker net" solutions. You will have to develop your own "extract and apply" solution.
unless you wanted to
a) have 'guard' and source synchronize
b) have target and 'guard' synchronize
that is, "man in the middle", I'm assuming this 'guard' is a machine that can sometimes connect to source and other times connect to target.
Keeping 2 databases in synch
Bilal, April 03, 2007 - 3:33 pm UTC
Thanks for that information. That was my initial conclusion, but I was not sure; thanks for the confirmation.
Replication, Streams or MERGE
A reader, May 09, 2007 - 1:33 pm UTC
Tom --
This is a great thread! I have to ask for advice though. We are on 9i, where we have 2 databases. One is our DB, where we run all of our ONLINE apps (the data here has to be real time, fast to access, etc.). Then we have another DB on the client side -- this DB has a lot of the data that we need for our online apps, BUT it is not optimized for performance. It is slow, very slow. Due to various policies, we cannot optimize the remote DB, so we are forced to load the data needed into our local database, into our local tables (which are indexed properly, etc.). We do this by means of materialized views and MERGE functionality. However, both of these solutions are not real time. We basically run scripts that merge the data every hour or so. So, we are now at the point where we need REAL TIME data on our side. There are so many solutions to this, I don't know where to start :) What would you suggest we do? Say we have 50 tables on the remote database. We build the same 50 tables locally BUT index them, etc., for performance. What is the best solution for this -- replication, Streams, Data Guard? To me, merging data via a DB link for 50+ tables is not very efficient ... I might be wrong. Thank you!
May 11, 2007 - 10:26 am UTC
I would start with "blowing up policies"
Seriously.
Is replication can be used in real world applications?
eric gao, June 13, 2007 - 2:56 am UTC
Dear Tom,
To build a reliable business system whose data must be distributed, what is the first choice: use replication, or implement it with triggers, procedures, etc.?
Is there any distributed business system you know of built upon the replication technology?
Are there any drawbacks to replication, and tips to watch for when using it?
Thanks in advance, Tom.
June 13, 2007 - 8:10 am UTC
... To build a reliable business system, whose data must be distributed ..
you have two conflicting goals.
Which do you want - reliable or distributed. Pick one.
replication
eric gao, June 14, 2007 - 1:30 am UTC
hi, Tom
Thank you for your fast reply to my question!
As to my last question, I want to know if replication can be used in a real business system as a data synchronization technology, and whether there are successful cases of it in real-world applications.
I have asked some people, and they tell me replication is rarely used in big business systems, especially when data needs to be sent bi-directionally. Is this true?
Hope for your reply.
June 14, 2007 - 7:13 am UTC
can replication be used - sure, you can use anything.
have I seen it be hugely successful? no
is it easy to implement? no (the design of a bi-directional system is, well, hard. given that most 'database programmers' don't understand simple transaction semantics, I shudder when I think of them designing a bi-directional update anywhere system)
is it easy to maintain? no
are there any 3rd party products you can buy that support it? none that I am aware of
is it the right solution for disaster recovery? never
does it introduce more problems than it solves? in virtually every case, yes.
synch by hand made code -- is this the ultimate solution?
eric gao, June 15, 2007 - 8:18 am UTC
thank you tom!
although it's not good news to hear from you that replication is not a suitable solution for my problem, I am still happy to know the truth.
can you give me some recommendations on how to implement bi-directional data synchronization instead of using replication? should I implement it with triggers, jobs and procedures?
June 15, 2007 - 11:54 am UTC
my point is this:
bi-directional replication suffers from the above list, there is nothing good about it, there is only hard stuff.
regardless of WHO wrote the replication code layer - Oracle, you, them, whatever.
My recommendation is never to go down that path, it is not worth it.
but if you persist in trying - use the stuff that is already written, don't even think about writing your own - that would be even worse, if such a thing was possible.
Partial refresh
Kim, July 26, 2007 - 10:03 am UTC
Hi Tom
We use an MV for replicating a table to another location, over a relatively slow network connection.
If the connection dies for a period, we can have a hard time making a fast refresh work. Quite often, it dies after some hours. A complete refresh is out of the question.
It would be nice to be able to refresh partially, say 10000 rows at a time, or e.g. all changes in a given timeframe.
I know this is not a current feature, but one I would like to see.
Could we do it ourselves by:
1) Locking the master table exclusively
2) Moving most rows from the MV log to another table, and refreshing
3) Moving some rows back and refreshing, repeating until all are done
What do you think?
Br
Kim
master - master replication
T.J, September 27, 2007 - 3:14 pm UTC
dear Tom,
I have site A and site B linked via a database link,
and both have the same tables with the exact same structure.
How do I set up synchronous replication between them?
i.e.
any insert, update or delete on site A leads to the same insert, update or delete on site B,
and
any insert, update or delete on site B leads to the same insert, update or delete on site A.
September 28, 2007 - 4:51 pm UTC
simple:
you erase one of them, and now you are done.
why would you want to double the workload on each??? Just have one database
what problem are you trying to solve with two "exact replica databases"?
master-master
T.J, September 27, 2007 - 3:33 pm UTC
Sorry Tom, I forgot this point:
in case site A goes down for any reason, once site A comes back up it must be able to get the changes made on site B, i.e. any DML done on site B while site A was down.
T.J, September 29, 2007 - 3:18 am UTC
it is not like that, Tom ..
updates on one site depend on the inserts or updates of the other side ..
so both sides have their own DML ..
Plus, sites A and B are physically different sites, so we can't use one database because each site has its own applications. But both sites have a few tables in common that I need to keep synchronized online.
i.e. it's not a full database synchronization, it's only a few tables synchronized between 2 separate databases.
October 03, 2007 - 1:46 pm UTC
then apparently you want a single database with a pair of schemas that share a third common schema.
Replicating data between databases
A reader, January 28, 2008 - 2:50 pm UTC
Hello Tom,
We are looking at options for moving data from one database to another. One of the objectives is to get the data to the target database within an hour. If both the source and target databases are Oracle, we were considering the option of using Oracle Streams (DB version: Oracle 10.2.0.3.0).
We have a few instances where the source/target databases are SQL Server 2005 and MySQL 5.x. That being the case, what would your recommendation be on moving data from one system to the other?
The suggestion we got from the developer group is to use some kind of Java messaging, or use TIBCO, and make the whole messaging layer database-independent. The infrastructure, development time and number of resources involved in getting this done through Java messaging or TIBCO seem to be on the high side.
We would really appreciate any comments or suggestions you may have on moving data between a) Oracle databases and b) Oracle and SQL Server/MySQL databases.
Thank you
January 29, 2008 - 6:50 am UTC
streams again. Streams can either be used "out of the box" for replication -- Oracle to Oracle -- or you can dequeue and apply the LCRs (logical change records) to anything you want by implementing a custom apply process.
Streams *is* messaging; you need nothing else to have message-oriented middleware. You can use JMS if you like, whatever.
Any solution then?
Roger, June 02, 2008 - 8:44 am UTC
After reading this entire thread, I get the feeling that there really isn't a decent way to set up or maintain a distributed database system.
I was looking for something that would allow me to have 2 databases in separate parts of the world, say one in Australia and another in London, and keep them in sync. We're storing a lot of images in our database, and performance for the folks in Australia is a problem if we keep a central repository in London -- and vice versa for the Londoners if our central repository is in Australia. So I was hoping to keep a database in Australia and one in London, have the London team upload their images to the London database (better performance because it's local), and then replicate/stream/distribute to Australia, so when one of those chaps wants an image he can select it from the local repository instead of having to fetch it from the London database.
This is possible in other content management systems, like Documentum, but is something like this possible when using the Oracle database as a content repository? And if it is possible, would Streams be the right tool to use?
Thanks
Roger
June 02, 2008 - 11:40 am UTC
it is absolutely possible; it is just really complex to sit down and design, then implement, then manage and manage and manage (Documentum or not, that last bit is there)
I strongly "do not suggest" replication because MOST people do not do steps 1 and 2 - design and implement that design, they just try to turn it on and see what happens (a mess happens)
In the year 2008, I would not consider building a content management system. I would buy it from someone, anyone.
I can say here at Oracle we have one file server (it is really a database) for the entire company (from Australia to London to California and all points in between). We store many 10's of terabytes in there (big files, small files). We all use a common data store - no replication. Internally, we replicate pretty much "not much" - it is cheaper and easier to have a good network.
Materialized view replication
Raj, August 19, 2013 - 2:53 pm UTC
Hi Tom -
Some tables in our production database (Oracle 11.2) do not have a primary key, but they need to be replicated to MSSQL for BI processing using MSSQL replication, which requires them to have primary keys.
Will the following solution work? It creates a materialized view that selects ROWID from the master table and creates a primary key constraint on it on the mview base table.
The materialized view base table will then be used for replication since it has a primary key.
> create table mytbl (x number);
Table created.
> create materialized view log on mytbl with rowid;
Materialized view log created.
> create materialized view mytbl_mv
2 refresh fast on demand with rowid
3 as
4 select rowid rid, x from mytbl;
Materialized view created.
> alter table mytbl_mv add constraint mytbl_mv_pk primary key (rid);
Table altered.
> select constraint_name, constraint_type from user_constraints where table_name='MYTBL_MV';
CONSTRAINT_NAME C
------------------------------ -
MYTBL_MV_PK P
> insert into mytbl values (1);
1 row created.
> insert into mytbl values (1);
1 row created.
> insert into mytbl values (2);
1 row created.
> exec dbms_snapshot.refresh('MYTBL_MV','F');
PL/SQL procedure successfully completed.
> select * from mytbl_mv;
RID X
------------------ ----------
AAGvQ9AA/AALPbVAAA 1
AAGvQ9AA/AALPbVAAB 1
AAGvQ9AA/AALPbVAAC 2
Can you please point out what can go wrong with the above? I have tested inserts, updates, and deletes from separate sessions and didn't get any errors or inconsistencies. But I am not sure if this would be a supported configuration or if I am missing some case where this could cause a problem.
August 28, 2013 - 5:25 pm UTC
rowids are not constant for a row.
if the DBA does a reorg - rowids change.
if the DBA enables row movement - various operations can change the rowid.
please do not rely on the rowid never changing.
suggestion:
o rename the table T to something_else
o create an editioning view T as select columns from something_else;
o alter table something_else add a surrogate key
o in 11g and before, use a trigger (evil, but ugh, sometimes OK) to populate this key with a sequence; in 12c and above use an identity column, it'll auto-populate
and base your MV on something_else
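A sketch of that restructuring for the 11.2 case above (the names are invented, and the schema must be editions-enabled before an editioning view can be created):

rename mytbl to mytbl_base;

alter table mytbl_base add ( id number );
create sequence mytbl_seq;

create or replace trigger mytbl_base_bi
before insert on mytbl_base
for each row
begin
    if :new.id is null then
        :new.id := mytbl_seq.nextval;
    end if;
end;
/

update mytbl_base set id = mytbl_seq.nextval where id is null;
alter table mytbl_base add constraint mytbl_base_pk primary key ( id );

-- existing code continues to see "mytbl":
create editioning view mytbl as select x from mytbl_base;

-- and the replication is keyed on ID rather than rowid:
create materialized view log on mytbl_base with primary key;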