Schema export with application-level consistency

Question and Answer

Chris Saxon

Thanks for the question, JItendra.

Asked: May 25, 2016 - 2:12 am UTC

Last updated: May 26, 2016 - 7:46 am UTC

Version: 11.2.0.2

Viewed 1000+ times

You Asked

I need to take a schema-level, application-consistent backup using the Data Pump expdp utility. I used the FLASHBACK_TIME parameter but am not able to get the expected result.

My application executes procedures in real time, and each procedure commits at multiple points. The requirement is to export either all transactions committed as part of a procedure run or none of them.

For example, a procedure execution starts at 9:00 PM and continues until 11:00 PM. If I initiate an expdp backup at 10:00 PM using FLASHBACK_TIME=SYSTIMESTAMP, it will capture all transactions committed between 9:00 PM and 10:00 PM but none committed between 10:00 PM and 11:00 PM. This is not what we want: we need either the data as it was before 9:00 PM, or the complete set of transactions from 9:00 PM to 11:00 PM.

Please help me implement this without causing any downtime for the application.

and Connor said...

You can do this for your flashback time:

-- earliest start time of any open transaction; "now" if none are open
select nvl( min( to_date( start_time, 'mm/dd/rr hh24:mi:ss' ) ), sysdate )
from   v$transaction;

If there are no active transactions, you'll get "now". If there is an active transaction, you will get when it began, and you can use that as a flashback time.
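
For illustration, here is a minimal sketch of wiring that into the export. The schema, directory, and dump file names are placeholders, and putting FLASHBACK_TIME in a parameter file sidesteps shell quoting:

-- capture the consistent point just before launching the export
select to_char(
         nvl( min( to_date( start_time, 'mm/dd/rr hh24:mi:ss' ) ), sysdate ),
         'yyyy-mm-dd hh24:mi:ss' ) as fb_time
from   v$transaction;

-- exp.par (placeholder names):
--   schemas=APP_SCHEMA
--   directory=DP_DIR
--   dumpfile=app.dmp
--   flashback_time="to_timestamp('2016-05-25 21:00:00','yyyy-mm-dd hh24:mi:ss')"
--
-- then run:  expdp system parfile=exp.par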

That's close but *not* a guarantee. The *only* way you could guarantee it would be some sort of serialisation/locking, which obviously impacts your app.


Comments

JItendra sharma, May 26, 2016 - 5:02 am UTC

Thanks Connor for the quick reply.
Could you please help with your 2nd point, as mentioned below:

The *only* way you could guarantee it would be some sort of serialisation/locking

Is there anything that can be implemented from the DBA end?
Any document or reference for implementing your solution?
Connor McDonald
May 26, 2016 - 7:46 am UTC

The easiest way to describe it is with one transaction. Your proc could do:


x := systimestamp;            -- evaluated *before* the transaction starts
insert into blah values (x);  -- the transaction begins with this insert

Even with the check against v$transaction, there is a window of opportunity where things may look "out of sequence":

- we check v$transaction; it's empty, and the time is 9:00:00.000000
- we start our export using that time
- then the insert above occurs at 9:00:00.001234
- but the value of *systimestamp* it will *store* will be (say) 8:59:59.994623, because it was evaluated ever so slightly before the transaction commenced.

The data is in fact consistent to a point in time, namely 9:00:00 am. But when someone looks at the table "blah" in the source system, they will see a row dated 8:59:59.994623 that did *not* make it into the export taken as of 9:00:00 am... hence everyone starts screaming about missing data, etc.
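
To see the mechanism end to end, here is a minimal runnable version of the snippet above (the table "blah" is hypothetical and just needs a single timestamp column):

declare
  x timestamp;
begin
  x := systimestamp;            -- evaluated first, while no transaction exists
  insert into blah values (x);  -- the transaction (and its start_time entry
                                -- in v$transaction) only begins at this insert
  commit;
end;
/

So a row stamped 8:59:59.994623 can commit just after a 9:00:00 flashback time was captured from an empty v$transaction, and it will miss the export despite carrying the earlier timestamp.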

How do you stop this? With locking.

Your procedure would do:

dbms_lock.allocate_unique('unique_name', l_handle);
l_status := dbms_lock.request(l_handle);
x := systimestamp;
insert into blah values (x);

and your export job would do something like:

dbms_lock.allocate_unique('unique_name', l_handle);
l_status := dbms_lock.request(l_handle);
-- start export
l_status := dbms_lock.release(l_handle);

Both sides request the same lock, so by definition they will not clash.
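
For completeness, a minimal runnable sketch of both sides, assuming EXECUTE privilege on DBMS_LOCK; the lock name APP_EXPORT_LOCK and the table "blah" are placeholders:

-- application side: serialise the business transaction against the export
declare
  l_handle varchar2(128);
  l_status integer;
  x        timestamp;
begin
  -- map the lock name to a handle (allocate_unique issues a commit,
  -- so call it before the transaction starts)
  dbms_lock.allocate_unique('APP_EXPORT_LOCK', l_handle);

  -- wait for the lock in exclusive mode; 0 means success
  l_status := dbms_lock.request(l_handle, lockmode => dbms_lock.x_mode);
  if l_status <> 0 then
    raise_application_error(-20001, 'lock request failed: ' || l_status);
  end if;

  x := systimestamp;
  insert into blah values (x);
  commit;

  l_status := dbms_lock.release(l_handle);
end;
/

-- export side: hold the same lock while fixing the flashback time
declare
  l_handle varchar2(128);
  l_status integer;
begin
  dbms_lock.allocate_unique('APP_EXPORT_LOCK', l_handle);
  l_status := dbms_lock.request(l_handle, lockmode => dbms_lock.x_mode);

  -- no application transaction can be mid-flight at this instant;
  -- capture the flashback time for expdp before releasing
  dbms_output.put_line(to_char(sysdate, 'yyyy-mm-dd hh24:mi:ss'));

  l_status := dbms_lock.release(l_handle);
end;
/

One design choice worth noting: the export session only has to hold the lock long enough to fix the flashback time. Once that instant is captured with no transaction in flight, the lock can be released and expdp run with FLASHBACK_TIME set to it, so the application is blocked for milliseconds rather than for the duration of the export.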


Export application-level consistent data

JItendra sharma, May 26, 2016 - 5:18 pm UTC

Thanks Connor,
Could you please give some more insight on the 2nd point:

The *only* way you could guarantee it would be some sort of serialisation/locking

How can we implement it? Can we implement it from the database end by any means?
