

Question and Answer

Tom Kyte

Thanks for the question, Robert.

Asked: September 26, 2008 - 2:16 pm UTC

Last updated: October 22, 2008 - 8:02 am UTC

Version: 10.2.0.4

Viewed 1000+ times

You Asked

Hi Tom,

Our Developers want an automatic way to (re)synchronize certain tables in the QA environment with production data.

I haven't gotten all the requirements yet, but it looks like they want to get the data in sync on QA (after changing credit card numbers, etc.)... then run some tests... then resync the QA data back to current production to test again another day.

What would you recommend for something like this?

Isn't there a feature using Data Guard and Flashback Database by which you can 'break off' the standby database... open it as read/write... modify it... and then use Data Guard and Flashback Database to get it back in sync as a standby again?

Please advise - what would you recommend?

Thanks,

Robert.

and Tom said...

This is not going to work as replication and synchronization. So forget the word "synchronize" and rule out "replication" immediately. Since you want to modify the database after you mask the data, you cannot use replication - once you start modifying this QA data, replication is over, done. You'd have to start from scratch next time.

What you want is either

a) 11g data guard with the ability to open read/write for testing - you'd need to utilize the data masking pack
http://www.oracle.com/corporate/press/2007_nov/oem-datamasking-ow.html
to 'mask' the sensitive data. Then you can test, flashback, and make it your standby again

b) restore a full backup to a point in time, then utilize the data masking package.
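For option (b), the restore-to-a-point-in-time step might look roughly like this in RMAN - a sketch only: the target time is a placeholder, and in practice you would restore onto the QA host (for example with RMAN DUPLICATE) rather than over production:

RMAN> RUN {
        SET UNTIL TIME "TO_DATE('26-SEP-2008 06:00','DD-MON-YYYY HH24:MI')";
        RESTORE DATABASE;
        RECOVER DATABASE;
      }
RMAN> ALTER DATABASE OPEN RESETLOGS;

Once the QA copy is open, the masking/scrub step runs against it before anyone tests.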






Comments

Consider Streams

Chen Shapira, September 26, 2008 - 5:16 pm UTC

If you are talking about a limited or at least known number of tables, and the change volume is not huge, then I'd look into Streams.

Streams is flexible enough to allow you to modify some fields before the row is inserted into the target table (using Apply Handler).
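A minimal sketch of what such an apply (DML) handler could look like - the schema, table, and column names (app.customers, CREDIT_CARD_NO) and the strmadmin owner are made up for illustration, and a real setup would register a handler for UPDATE as well as INSERT:

CREATE OR REPLACE PROCEDURE strmadmin.mask_cc (in_any IN ANYDATA)
IS
  lcr SYS.LCR$_ROW_RECORD;
  rc  PLS_INTEGER;
BEGIN
  rc := in_any.GETOBJECT(lcr);         -- extract the row LCR from the ANYDATA wrapper
  -- overwrite the sensitive column with a test value before the row is applied
  lcr.SET_VALUE('NEW', 'CREDIT_CARD_NO',
                ANYDATA.CONVERTVARCHAR2('4111111111111111'));
  lcr.EXECUTE(TRUE);                   -- apply the modified row on the target
END;
/

BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'app.customers',
    object_type    => 'TABLE',
    operation_name => 'INSERT',
    error_handler  => FALSE,
    user_procedure => 'strmadmin.mask_cc');
END;
/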
Tom Kyte
September 27, 2008 - 11:17 am UTC

and when they change it in QA - boom, start over after you are done.

If you do one way replication, it must be read only - hence this "synchronize" thing makes replication a non-starter.

If they wanted to sanitize a reporting database - that would be one thing. They want to create a testing instance (reads and WRITES) and synchronize it - well, that would have to be two way and that cannot happen here.

a suggestion

Jim, September 26, 2008 - 6:02 pm UTC

We take a backup of production and restore it to QA and then run a scrub script to remove sensitive information. (eg change all the credit card numbers to a test cc number)

This allows you to have a copy of prod and allows you to test your backup. Yes, it takes a while but it does test your backup process.
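The scrub script itself can be as plain as a few updates run on the QA copy right after the restore - a sketch, with hypothetical table and column names:

-- run on QA only, immediately after the restore completes
UPDATE app.customers
   SET credit_card_no = '4111111111111111',      -- well-known test card number
       email          = 'qa_' || customer_id || '@example.com'
 WHERE credit_card_no IS NOT NULL;

COMMIT;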
Tom Kyte
September 27, 2008 - 11:19 am UTC

point (b) above :)

How about PROD to PROD

Robert, September 28, 2008 - 11:29 pm UTC

Tom,

Ok... I've got some things to think about...

But on a similar note... what about one-way replication in PROD-to-PROD environment?

In 10.2.0.4 we are using a crude materialized view refresh
to do a periodic one way sync from PROD-A to PROD-B databases.
Could STREAMS be a better approach for this instead of MV?

Thanks,

Robert.
Tom Kyte
September 29, 2008 - 9:18 am UTC

why do you call a materialized view "crude"? It is quite simple to set up, to monitor, to maintain - very few moving pieces. If I could do what I needed with a read only materialized view - that would be it (it is easy)

Maybe crude in its implementation...

Robert, September 29, 2008 - 7:00 pm UTC

Tom,

It seems crude to me because the guts of the MV are just this...

select * from table@remote_db

And every time it runs (once or twice a day) there is heavy I/O on the database.

I inherited this MV and the basic design/philosophy of the thing.

Is there a better way to update a materialized view than just a mass "select *" periodically?

Thanks,

Robert.
Tom Kyte
September 29, 2008 - 8:25 pm UTC

do you not have materialized view logs on the remote site?

No MV logs....

Robert, October 11, 2008 - 8:54 am UTC

Tom,

No sir, we do not have any MV logs for these views.

It was set up very crudely (though effectively, in a simple way).
I'm not sure if there was any method to this 'madness'
or if it was just designed this way out of naivete.

Would MV logs be the way to go then - dispersing the activity/load of the remote table updates over time instead of a massive update once a day?

Thanks,

Robert.
Tom Kyte
October 13, 2008 - 2:48 am UTC

adding MV logs now will negatively impact the existing system - it will 'add work during modifications'.

adding MV logs now will positively impact the existing system - it will 'reduce the work done during a refresh'.

So, do the cost benefit/trade off analysis here.

Streams might be something to consider at this point, yes, since you are in the middle of considering making a MASSIVE change to both systems.
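For reference, a sketch of the change being weighed - the table, MV, and database link names (orders, orders_mv, prod_a) are hypothetical:

-- On the master (PROD-A): record changed rows so refreshes stop re-reading the whole table
CREATE MATERIALIZED VIEW LOG ON orders WITH PRIMARY KEY;

-- On PROD-B: recreate the MV as fast refreshable
CREATE MATERIALIZED VIEW orders_mv
  REFRESH FAST ON DEMAND
  AS SELECT * FROM orders@prod_a;

-- each scheduled refresh now ships only the logged changes
EXEC DBMS_MVIEW.REFRESH('ORDERS_MV', 'F')

That is the trade-off described above: the log adds a little work to every modification on PROD-A, and removes most of the work from each refresh on PROD-B.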

Streams replication and flashback

Stew Ashton, October 13, 2008 - 8:27 am UTC


I have received a "requirement" quite similar to the original question:

- keep a copy of a production database "in sync"
- on demand suspend the sync process, then
- do some updates on the copy, then
- back out the updates and reactivate the sync process.

One approach being considered is:

- Use Streams Replication to update the copy
- Suspend the "apply process"
- Grab the SCN on the copy database
- do the updates
- Flash back the copy database (or the appropriate tables) to the "old" SCN
- restart the apply process.

I do not like this whole idea. However, like it or not, it would be easier to discard this idea if I knew it didn't work.

I gather from your original answer that doing any updates to the copy will break replication for good. Is that true for this specific scenario as well?
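In outline, the steps above might look like this (the apply name, table, and SCN bookkeeping are placeholders, and whether this actually holds together is exactly what is in question):

-- suspend the sync
EXEC DBMS_APPLY_ADM.STOP_APPLY('QA_APPLY')

-- note where the copy is before testing
COLUMN scn NEW_VALUE before_test
SELECT dbms_flashback.get_system_change_number scn FROM dual;

-- ... run the test updates on the copy ...

-- back out the test changes (each table needs ROW MOVEMENT enabled)
FLASHBACK TABLE app.orders TO SCN &before_test;

-- resume the sync
EXEC DBMS_APPLY_ADM.START_APPLY('QA_APPLY')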
Tom Kyte
October 14, 2008 - 4:53 pm UTC

11g does this with a physical standby database - out of the box - it is a function of the supplied software....

You can do this in 10g as well, you would have a physical standby. You would stop the redo stream from flowing from production to standby. You could then open standby read/write - do something - close it, flash it back and resume redo stream. Now, while you were testing, the standby would not be a standby (the redo stream STOPPED flowing - 11g lets it continue even as you are testing - you are always protected).
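A hedged sketch of that 10gR2 manual cycle, run on the standby - the restore point name is a placeholder, and the exact sequence and prerequisites (e.g. a flash recovery area for the guaranteed restore point) should be checked against the Data Guard documentation:

-- stop redo apply and mark the point to come back to
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
CREATE RESTORE POINT before_testing GUARANTEE FLASHBACK DATABASE;

-- turn it into a read/write database and test
ALTER DATABASE ACTIVATE STANDBY DATABASE;
ALTER DATABASE OPEN;
-- ... mask data, run the tests ...

-- throw the test changes away and make it a physical standby again
STARTUP MOUNT FORCE
FLASHBACK DATABASE TO RESTORE POINT before_testing;
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
-- then restart managed recovery and let the accumulated redo catch it up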


Scott Mattes, October 16, 2008 - 10:54 am UTC

What about using dbms_wm?
Tom Kyte
October 17, 2008 - 9:02 pm UTC

dbms_wm works within a single database - it does not seem to meet the basic requirement?

sounds like they want a testing instance, as a copy of production, to safely test things - not in production.

dbms_wm would have a rather large impact on production day to day - every query and modification, every second, just to do the versioning.

it would be in a single database (not safe to test in)


All are very nice ideas

Adarsh Kumar, October 18, 2008 - 12:17 am UTC

I think there is another way of making a test database (B) that can be kept in sync, using the following idea:

1) Configure Streams between database A --> B.
2) Make sure the conflict handler is off on the tables in DB B.
3) Sync database B using Streams.
4) Stop the apply process on B.
5) Now you can make any changes on database B and test the way you want.
6) Once you are done, flash back database B and start the apply process. This will sync database B again.

Thanks - Adarsh Kumar
Tom Kyte
October 18, 2008 - 10:02 pm UTC

or, as I said, data guard - it sort of does that without the hassle of what you have set up...


Streams replication and flashback, revisited

Stew Ashton, October 18, 2008 - 9:42 am UTC


@Adarsh Kumar : your idea is exactly the same as the one I was trying to describe on October 13. Have you actually gotten this to work?

@Tom : I was withholding embarrassing information, sorry about that. Database B would be a small subset of database A (thus the idea of replication rather than Data Guard) and would be used as a substitute production database while A is stopped for cold backup and batch processing. Then when A is back in business, applications would apply B's changes to A.

Now how awful is that? For over a decade they have been doing TP all day and backup + batch processing at night and they don't know how to move on. Now they want to do some TP while the production database is down, so they came up with the idea of temporarily using database B.

My question above was assuming at least version 10gR2.
I have since found out that, not only is database A still in 9.2, but they are still using rollback segments. I just don't see any way forward without starting over and using the database as it was intended to work.
Tom Kyte
October 18, 2008 - 11:10 pm UTC

... and would be used as a substitute production database while A is stopped for cold backup and batch processing. Then when A is back in business, applications would apply B's changes to A. ...


a) why would you cold backup? You are in archive log mode (you must be), and hot is easier and infinitely better than cold. You would NEVER EVER cold backup - that would be a waste.

b) if you do batch - well, umm, I don't see how you could use the other database. It would be update-anywhere replication, and update conflicts would abound. Just use a single database - I don't get the advantage of two databases, I only see serious and harsh disadvantages.

re: substitute production database

Stew Ashton, October 19, 2008 - 4:33 am UTC


...a) why would you cold backup?...
They are using TimeFinder (which copies the disk files) and don't dare do an update while it is doing its thing.

...b)...I only see serious and harsh disadvantages...
You said it.

For what it is worth, I will strongly recommend doing what it takes to use the single database, even if it means (gasp!) learning how Oracle really works. No matter what, the application will have to deal with batch updates and TP updates coexisting.
Tom Kyte
October 21, 2008 - 11:21 am UTC

a) I don't understand what TimeFinder has to do with anything. HOT or COLD, you would have the same issues.

USE HOT, PERIOD.



Re: TimeFinder

Stew Ashton, October 21, 2008 - 3:53 pm UTC


I believe the idea is TimeFinder will copy the files and then you use RMAN on the copies rather than the original. I have no idea why anyone would consider this necessary, unless they think they are still working on mainframes from the '80s.

I shall continue to recommend changing to hot backup, and I shall enjoy quoting you IN UPPERCASE.
Tom Kyte
October 22, 2008 - 8:02 am UTC

then you are just using rman to do image copies - and you have no more functionality than if you were doing it yourself in 1985.

And I still have no idea why you would not do hot, you would

a) put tablespaces in backup mode
b) timefinder split
c) take them out of backup mode
d) copy the split mirror
e) re-silver

You would never shut down for something this trivial - you flush the caches, you prevent work from happening - don't do it.
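In SQL, steps (a) through (c) amount to roughly this (10g syntax shown; on 9i you would put each tablespace in backup mode individually with ALTER TABLESPACE ... BEGIN/END BACKUP):

ALTER DATABASE BEGIN BACKUP;        -- (a) datafiles safe to copy while the database stays open
-- (b) take the TimeFinder split / storage snapshot here
ALTER DATABASE END BACKUP;          -- (c) back to normal operation
ALTER SYSTEM ARCHIVE LOG CURRENT;   -- make sure the redo covering the copy is archived
-- (d) back up the split mirror, (e) re-silver - production never stops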
