Question and Answer

Tom Kyte

Thanks for the question, Jagjeet.

Asked: July 29, 2004 - 7:37 am UTC

Last updated: September 29, 2016 - 11:06 am UTC

Version: 9.2.0

Viewed 10K+ times

You Asked

hello Sir,

Could you please explain the parameter consistent=y in exp.
What is cross-table consistency?
Is it about the parent-child relationship?



Thanks,
Js




and Tom said...

it makes all of the queries run by exp be "as of the same point in time -- consistent with regard to each other"

imagine if you started an export at 9am of the EMP and DEPT tables.

EMP started exporting at 9am and DEPT at 9:15am.

Now, the EMP data would be "as of 9am", but the DEPT data would be as of 9:15am. What if you import that data now -- will it work? maybe, maybe not -- perhaps at 9:10am, someone fired the last employee in department 50 and deleted department 50. Your EMP export would have them in this deptno, your DEPT export would not have this deptno. The import would fail.

If you use consistent=y, Oracle will export all of the data "as of 9am", so deptno=50 will be included in all tables and the import will succeed.
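As a sketch (the schema, password, and file names here are hypothetical), the difference is a single flag:

```
# default: each table is read as of its own query start time
exp jagjeet/secret owner=jagjeet file=f.dmp

# consistent=y: every table is read as of the moment the export began
exp jagjeet/secret owner=jagjeet file=f.dmp consistent=y
```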


Comments

consistent=y should be the default setting!

Robert, July 29, 2004 - 5:30 pm UTC

Tom,

Is 'consistent=y' the default value?
If not... why not?

Thanks!

Robert

Tom Kyte
July 30, 2004 - 6:58 am UTC

$ exp help=y

shows the defaults.

No it is not the default.

I think exp is the tool with the defaults in reverse sometimes :) it has compress=y which is "really bad" as the default.


8.1.7.4 Doesn't show default for 'consistent'

Robert, July 30, 2004 - 8:39 am UTC

Tom,

Oracle 8.1.7.4

Thanks for the feedback.
I did check originally using 'exp help=y' but it doesn't show the default value for 'consistent' as it does for some of the other parameters (maybe this help=y behavior is just on 8.1.7.4?)
But it does show this in the documentation (which I looked at after sending the question).
Sorry for the dumb question (1)... but thanks for the insight for the second question.

Thanks,

Robert

Tom Kyte
July 30, 2004 - 4:47 pm UTC

look at that, it lists it but doesn't have the default for it in 8i, hmm.

Thanks For Reply.

Jagjeet Singh, July 31, 2004 - 8:28 am UTC

Thanks for your reply.

Do I need to configure a large undo tablespace at the time of export to prevent *snapshot too old* (ORA-01555) if a lot of big updates are going on at the same time?


Thanks,
Js


Tom Kyte
July 31, 2004 - 12:15 pm UTC

you configure a retention period and ensure you have sufficient space for your undo to grow if need be. the undo advisor (enterprise manager or a v$ view) can tell you how much undo space you would need if you plan on setting a retention period of "x units of time"
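As an illustrative sketch (the retention value below is made up; size it to exceed your longest-running export), with automatic undo management that amounts to:

```sql
-- suggest undo be kept for two hours (longer than the export runs)
ALTER SYSTEM SET undo_retention = 7200;

-- rough undo generation rate from recent history, to help size the tablespace
SELECT MAX(undoblks / ((end_time - begin_time) * 86400)) AS undo_blocks_per_sec
FROM   v$undostat;
```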

Consistent=y Not working for me

Jagjeet Singh, November 06, 2004 - 1:54 am UTC

Hi ,

I was trying to check the consistent parameter in the export
utility, but it's not working for me.

I setup this small test case

create table a (aa number primary key);
create table b as select * from <<big_table>>;
create table c (aa references a);


I have this script, which inserts one row into both the parent
and the child table every second, committing each pair.

[ora9@Js oth]$ cat ins.sql

begin
delete from c;
delete from a;
for i in 1..30 loop
insert into a values (i);
insert into c values (i);
commit;
DBMS_LOCK.SLEEP(1);
end loop;
end;
/
exit;

-- running this script in background.

[ora9@Js oth]$ sqlplus -s "/" @ins.sql &
[1] 2877
[ora9@Js oth]$

### running exp. with consistent=n [default value]

[ora9@Js oth]$ exp owner=ops\$ora9 file=f.dmp

Export: Release 9.2.0.1.0 - Production on Fri Oct 29 01:57:44 2004

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Username: / as sysdba

Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Export done in US7ASCII character set and AL16UTF16 NCHAR character set

About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user OPS$ORA9
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user OPS$ORA9
About to export OPS$ORA9's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export OPS$ORA9's tables via Conventional Path ...
. . exporting table A 16 rows exported
. . exporting table B 99088 rows exported
. . exporting table C 20 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

#### exp spends about 4 seconds on table B.

[ora9@Js oth]$
PL/SQL procedure successfully completed.

[1]+ Done sqlplus -s "/" @ins.sql
[ora9@Js oth]$

#### again in background

[ora9@Js oth]$
[ora9@Js oth]$
[ora9@Js oth]$ sqlplus -s "/" @ins.sql &
[1] 2899

### Again exp, with consistent=y for cross-table consistency.

[ora9@Js oth]$ exp owner=ops\$ora9 file=f.dmp consistent=y

Export: Release 9.2.0.1.0 - Production on Fri Oct 29 01:58:39 2004

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Username: / as sysdba

Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Export done in US7ASCII character set and AL16UTF16 NCHAR character set

About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user OPS$ORA9
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user OPS$ORA9
About to export OPS$ORA9's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export OPS$ORA9's tables via Conventional Path ...
. . exporting table A 10 rows exported
. . exporting table B 99088 rows exported
. . exporting table C 16 rows exported   ### 16 rows ?? expected 10 rows

. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.
[ora9@Js oth]$
PL/SQL procedure successfully completed.


[1]+ Done sqlplus -s "/" @ins.sql
[ora9@Js oth]$


Sir, the second time I was expecting Oracle to export a
consistent number of records for tables A and C.

Thanks,
Js




Tom Kyte
November 06, 2004 - 10:55 am UTC

Ahh -- they tricked you -- read-only transactions are NOT SUPPORTED for "sys" (as sysdba or otherwise),

and exp uses a read-only transaction to implement consistent=y.

the documentation has this to say:

<quote>
CONSISTENT=y is unsupported for exports that are performed when you are connected as user SYS or you are using AS SYSDBA, or both.
</quote>


You would either

a) not perform this export as "sys"
b) use flashback_scn or flashback_time if you do this as "sys"
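A sketch of option (a), using a hypothetical schema-owner account instead of sysdba:

```
# connect as the schema owner -- read-only transactions work here,
# so consistent=y behaves as documented
exp jagjeet/secret owner=jagjeet file=f.dmp consistent=y
```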

Using a function in FLASHBACK_TIME

Bill Geiss, November 09, 2004 - 9:25 am UTC

Tom,

We are trying to use expdp from a script that will run on a periodic basis from a cron job. We would like to set flashback_time to be sysdate, but we can't seem to get it to run when we run

expdp ... FLASHBACK_TIME='"to_timestamp(sysdate)"'
(we've tried lots of variations . . . )

When we hardcode a date it works fine. We are on AIX.

Any thoughts?

Tom Kyte
November 09, 2004 - 9:41 am UTC

since you are in a script, use


expdp ... flashback_time=`date +"%d-%b-%G %l.%M.%S.%N %p"` .....


for example.
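As a quick check of what that backquoted `date` expression expands to (this is GNU date on Linux; `%N`, nanoseconds, is a GNU extension, and `%G` is the ISO week-based year, so `%Y` may be the safer choice near year boundaries):

```shell
# prints something like "09-Nov-2004  9.41.00.123456789 AM"
LC_ALL=C date +"%d-%b-%G %l.%M.%S.%N %p"
```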



flashback_time gives ORA-39050: parameter TABLE_CONSISTENCY is incompatible with parameter

Anton Nielsen, November 09, 2004 - 4:36 pm UTC

Tom,

Thanks, I've managed to get my script to run with FLASHBACK_TIME and FLASHBACK_SCN, but I always get this error in my log file:
****
Setting flashback_time gives ORA-39050: parameter TABLE_CONSISTENCY is incompatible with parameter flashback_time
****
My log also always (not just when setting flashback_time/scn) has the following:
FLASHBACK automatically enabled to preserve database integrity.

I logged a TAR with support, but they have not been able to tell me whether the error indicates that my FLASHBACK_TIME is being ignored, or whether I am getting a consistent view of the data, etc.

Thanks for any help you can give.


Tom Kyte
November 09, 2004 - 7:28 pm UTC

where are you getting that from? what tool exactly?

expdp

Anton Nielsen, November 10, 2004 - 9:10 am UTC

Tom,

I'm running this (below is from command line, but also happens from a script):

expdp spirit_hd/pw@spirita schemas=spirit_hd directory=hddir exclude=TABLE:"LIKE'CNV_%'",TABLE:"LIKE'BRIDGE_AUDIT%'",TABLE:"LIKE'BDGTEST%'" content=data_only dumpfile=test9.dmp logfile=test9.log FLASHBACK_SCN="12345"

(also happens with FLASHBACK_TIME)

The export runs to completion, with one error. When I check the log file (test9.log) it shows:
------
ORA-39050: parameter TABLE_CONSISTENCY is incompatible with parameter flashback_scn
FLASHBACK automatically enabled to preserve database integrity.
------

This database (10.1.0.2 on AIX) was cloned (not sure how) from another database that is running dataguard. That is, the source database is a master and has a physical dataguard standby db associated with it.

Thanks Again,

Anton




Export with query option

A reader, November 18, 2004 - 2:57 am UTC

Tom,

I tried to export a subset of the data from the table; it's failing.

query=\"WHERE RECORD_CREATION_DATE > TO_DATE('01/01/2004','MM/DD/YYYY')\"

LRM-00116: syntax error at ')' following 'MM/DD/YYYY'
LRM-00113: error when processing file '/appl/supp4.par'

EXP-00019: failed to process parameters, type 'EXP HELP=Y' for help

please let me know if anything is wrong with my syntax

Thanks a lot

Tom Kyte
November 18, 2004 - 10:37 am UTC

seems you are using a parfile (gotta guess, you didn't actually show us...)


you don't escape in a parfile, just try

query = "where......"

you'd need the \" on the command line, for the shell -- not for exp.
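For instance (the parfile name and predicate are taken from the question above), the two forms would look like:

```
# supp4.par -- inside a parfile the shell never sees the value,
# so the quotes are written plainly, with no backslashes:
query="WHERE RECORD_CREATION_DATE > TO_DATE('01/01/2004','MM/DD/YYYY')"

# on the command line the same quotes must be escaped with \"
# so the shell passes them through to exp untouched
```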



Now it's working!

A reader, November 19, 2004 - 3:49 am UTC

Thanks Tom. After removing the \" from the parameter file, it works.

Ana, February 04, 2005 - 2:14 pm UTC

"you configure a retention period and ensure you have sufficient space for your undo to grow if need be."

- meaning the retention period configured should not be less than the time taken for the export?

Thanks
Ana

Tom Kyte
February 04, 2005 - 2:26 pm UTC

correct (and the size of undo should be large enough to hold that)

Consistent=Y in SQL

Matt, April 27, 2005 - 11:50 am UTC

I'm looking at a database where all table data can be identified by a particular key value. As such, it is possible to 'horizontally slice' the data based on this key value. Additionally, there is functionality to copy all data associated with one of these key values.

Now, this 'key value copy' works a table at a time. It occurred to me that after the first table has been copied, another user could modify the data in a later table whilst the copy is in progress. The result would then be an inconsistent data set.

What options are available to take a consistent copy of this data (in order to then re-insert it elsewhere, i.e. I must have read-write access)?

All I can come up with is a "domino copy", e.g.: gather tab1 data; gather tab2 data based on tab1 data; gather tab3 data based on tab1 and tab2 data, etc.

The existing process considers each table in isolation.

I'm looking for something that will minimise the impact on the existing process, if possible.

Thanks in advance.

Tom Kyte
April 27, 2005 - 12:05 pm UTC

for a long running transaction such as that -- flashback query.

before you start, use dbms_flashback.get_system_change_number to get the SCN in place...

then use "as of scn :n" in all queries to retrieve the data as of the same point in time.

As long as you have sufficient undo, there you go.
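A minimal sketch of that pattern (the table and bind names here are hypothetical):

```sql
-- capture one SCN before the copy starts
VARIABLE scn NUMBER
EXEC :scn := dbms_flashback.get_system_change_number;

-- read every table as of that same SCN, so the slice is consistent
INSERT INTO tab1_copy SELECT * FROM tab1 AS OF SCN :scn WHERE key_val = :k;
INSERT INTO tab2_copy SELECT * FROM tab2 AS OF SCN :scn WHERE key_val = :k;
```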

Nice answer

Matt, April 27, 2005 - 1:13 pm UTC

Exactly what I need.

Why is compress=y "really bad"?

Sami, April 27, 2005 - 9:30 pm UTC

Dear Tom,

Earlier, in the same thread, you mentioned that
<Asktom>
I think exp is the tool with the defaults in reverse sometimes :) it has compress=y which is "really bad" as the default.
</Asktom>

Why is compress=y really bad? We are exporting with compress=y in our 8i database. Kindly explain.

Thanks in advance.


Tom Kyte
April 27, 2005 - 9:52 pm UTC

compress=y writes a create statement into the dmp file that tries to allocate an initial extent "big enough" to hold all of the data -- as computed by summing up the existing extents

a) this is almost always way too big
b) it promotes the ancient myth that a single extent is desirable
c) it overallocates space
d) it can make imports fail because they cannot allocate that whopping big initial extent.

for starters..

consistent=y

hinglaj singh, April 28, 2005 - 6:43 am UTC

i had been using this command for a long time, but only today did I learn its exact meaning. thanks tom

Tom Kyte
April 28, 2005 - 8:30 am UTC

means if you start the export at 11am and it finishes at 2pm (3 hours), ALL tables would be extracted "as of 11am" -- all tables consistent with each other with regard to time, all AS OF the same point in time.

Alternative to flashback for consistent query

Matt, April 29, 2005 - 6:17 am UTC

I just wondered why, in your response to my query above regarding consistent "horizontal slicing" of data, you did not suggest modifying the isolation level for the duration of the transaction.

I don't see a big problem with this (isolation-wise) as I would only be selecting data and inserting data (no deletes or updates).

Ta.

Tom Kyte
April 29, 2005 - 8:37 am UTC

serializable is for short, short transactions. I anticipated this one to be long.

I didn't know if they would be doing inserts/updates/deletes (I imagine they could well do updates or deletes to a log table or something, and if they run more than one of these "slice and dice" operations, ORA-08177 can happen, because serialization isn't purely at the row level).

long-running serializable transactions run into 8177 problems in general; the longer they run, the higher the chance.

The flashback approach could suffer from possible 1555's (but so would serializable), but not 8177's.

Export of selective tables given by a query

ANOOP GUPTA, May 19, 2005 - 4:22 am UTC

Hi Tom,
Is there any way by which we can export a selective set of tables given by a query?
We don't want a hard-coded list of tables.

Tom Kyte
May 19, 2005 - 8:01 am UTC

take the result of the query, spool it to a parfile, and use the parfile=filename option.

that is, put the list of tables in the parfile:

tables=(.....)
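A sketch of generating that parfile in SQL*Plus (the LIKE predicate is hypothetical, and LISTAGG needs 11gR2 or later; on older releases build the list in PL/SQL instead):

```sql
SET heading OFF feedback OFF pagesize 0
SPOOL tabs.par
SELECT 'tables=(' ||
       LISTAGG(table_name, ',') WITHIN GROUP (ORDER BY table_name) || ')'
FROM   user_tables
WHERE  table_name LIKE 'FACT%';
SPOOL OFF
-- then: exp userid=... parfile=tabs.par
```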

exp of selective tables

Anoop Gupta, May 24, 2005 - 10:14 am UTC

Thanks Tom, it's really very useful.

read-only tablespace, read-only database

Sokrates, October 25, 2006 - 7:02 am UTC

assume that I want to export data from a read-only database,
or from a read-only tablespace.

Then consistent=y is not needed, OK?
Will exp be faster if I omit it?

Tom Kyte
October 25, 2006 - 9:40 am UTC

it will be neither faster nor slower.

FLASHBACK_SCN

abz, January 16, 2007 - 8:39 am UTC

The documentation says that the default value
for FLASHBACK_SCN is none, and that FLASHBACK_SCN is not a
required parameter. My question is: if we do not use this
parameter, what will its value be?

Also, how can we get the latest SCN from our database?

SYS and flashback

Igor, March 28, 2007 - 7:12 am UTC

Hi Tom,

I found you said
>> b) use flashback_scn or flashback_time if you do this as "sys"

For 10.2 documentation says that
"SYSDBA is used internally and has specialized functions; its behavior is not the same as for generalized users. Therefore, you should not typically need to invoke Export or Import as SYSDBA, except in the following situations:
* At the request of Oracle technical support
* When importing a transportable tablespace set"

On Metalink we can find that actually also flashback is not supported by SYS - note 277237.1:
"Queries by SYS will return changes made during the transaction even if SYS has set the transaction to be READ ONLY. Therefore export parameters like CONSISTENT, OBJECT_CONSISTENT, FLASHBACK_SCN, and FLASHBACK_TIME cannot be used."

Using
exp \'/ as sysdba \' full=y flashback=786018
I got
"EXP-00103: The FLASHBACK_TIME parameter was invalid
EXP-00008: ORACLE error 8185 encountered
ORA-08185: Flashback not supported for user SYS
ORA-06512: at "SYS.DBMS_FLASHBACK", line 12
ORA-06512: at line 1
EXP-00000: Export terminated unsuccessfully
"
Is there some SYS vs SYSDBA ambiguity that I am not aware of?

Another question: I suppose that using flashback_scn would be more precise than the consistent option. Under which conditions should we use consistent=Y instead of flashback_scn=...?
Is consistent=Y just there for 'historical reasons'?

Thank you and regards.
Tom Kyte
March 28, 2007 - 12:09 pm UTC

yup, I misspoke there. you would not use sys or sysdba at all.


flashback_scn is not "more precise". consistent=y uses the "current SCN", flashback_scn lets you pick a DIFFERENT one, not a "more precise" one.

Thank you

Igor, March 29, 2007 - 5:02 am UTC


expdp

Igor, March 29, 2007 - 9:01 am UTC

Hi,

Do you know the reason that expdp does not have a consistent parameter?
This way, we need to find the TIME/SCN before running the command.

Thank you and regards.
Tom Kyte
March 30, 2007 - 12:54 pm UTC

nope, I don't know - but flashback_time=systimestamp would be "now"

As of "now"

Igor, April 02, 2007 - 5:51 am UTC

Hi,
I tried
expdp ... flashback_time=systimestamp

ORA-39001: invalid argument value
ORA-39150: bad flashback time
ORA-01841: (full) year must be between -4713 and +9999, and

I guess I should not take it literally but provide a value?

Thank you and regards
Tom Kyte
April 03, 2007 - 8:36 pm UTC

to char it in the format your system wants to have...

to_timestamp solved the issue

Igor, April 11, 2007 - 5:14 am UTC

Hi,
To give some feedback:
to_timestamp( systimestamp )
worked correctly in my case.
I haven't seen drawbacks with this so far.


exp consistent=y and ORA-01555

JL, May 11, 2007 - 12:21 pm UTC

Tom:

i have Oracle 9.2.0.6. Every day I run exp full=y consistent=y, which takes around 90 minutes. Two days ago I started getting ORA-01555.
Yesterday I checked the commits in the processes and moved them to the end of the programs.

Reading this thread and your books Expert Oracle One-on-One and Effective Oracle by Design, if I understood them:
a) I don't have to create a big RBS for transactions, because I am on 9i and they are locally managed (?) (very confused here; the documentation says that will be deprecated).

http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96521/undo.htm#9114
"The use of rollback segments for managing undo space will be deprecated in a future release. Oracle strongly recommends that you use automatic undo management and manage undo space using an UNDO_TABLESPACE."

b) I can do exp full=y consistent=n when there are a lot of transactions on the db.
c) I can do exp full=y consistent=y when there are not a lot of transactions on the db.


Is that right? (I have rman backups too, but I don't want to lose the dmps.)

thanks in advance,
JL
Tom Kyte
May 11, 2007 - 1:37 pm UTC

a) you are confusing terms here

locally managed is an extent management thing for tablespaces

do you mean "automatic undo management" - I believe so...

b) you would set your undo_retention to be more than 90 minutes
c) see b)

I just hope this isn't your backup strategy. there is only one correct backup approach - and that uses, well, backups. Not logical copies of data.

thanks

JL, May 11, 2007 - 3:30 pm UTC

Tom

Now I understand; the key is "the undo retention period is a suggestion".
3 months ago my undo tablespace had no maxsize; it grew from 4G to 26G. So I changed undo_retention from 3 hours to 1 hour (bad choice).

...the book says it in bold...

thanks again.
JL

can data pump be used across different OSes

Sandy, November 03, 2009 - 7:17 am UTC

dear Tom,

I have a production database on an AIX server and I want to use Data Pump to export a table into my local Oracle server, which is on Windows. Can this be done?

I have exported the tables from production;
the expdp completed successfully.

When I tried the import, impdp gave the following error:

C:\Documents and Settings\tm-it-319>IMPDP Scott/tiger Directory=DATAPUMP1 DumpFile=Dump.dmp

Import: Release 10.1.0.2.0 - Production on Tuesday, 03 November, 2009 18:25

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31619: invalid dump file "C:\DBBckup\Dump.dmp"

Tom Kyte
November 09, 2009 - 2:27 pm UTC

data pump can be used over a dblink from database to database without a file



do you have the file on the windows machine? does c:\dbbckup\dump.dmp exist?

if it does exist, is it exactly the same size it was on AIX? Windows ftp, if you used that, tends to corrupt files by default as it does a TEXT transfer (not smart of it, but it is Windows after all, designed to be easy). You would do "type binary" before ftp'ing.


the dmp files are completely cross platform - unless you've corrupted it.

Consistency and constraints

Serge Shmygelskyy, June 02, 2010 - 12:00 pm UTC

Hi Tom,

As I understand it, by default Data Pump doesn't provide cross-table consistency. So what's going to happen if we need to export/import a big amount of data over a dblink and the source DB is not restricted from updates? E.g.:

moment 1: we start exporting table1, which is referenced by table2
moment 2: new rows arrive in table1 and table2
moment 3: we start exporting table2, which now has rows referencing rows of table1 that do not exist in the export file

What's going to happen during the import? Will the constraints simply fail, so that they need to be enabled manually?

Is this the way to go at all?

Thanks in advance
Tom Kyte
June 08, 2010 - 9:57 am UTC

when you say something like "by default" that typically implies "there are other options"

If you need a consistent datapump export - you can get one. If you have cross table constraints and therefore need a set of tables as of a point in time, you have that ability and would therefore make use of the non-default behavior.



consistent data pump export on Windows platform

Dan, June 22, 2010 - 2:14 pm UTC

Hi Tom

I need to take full database export datapump on Windows to migrate it to UNIX platform.

a) Should I use the 'system' userid to do the full Data Pump export, or create another OS-authenticated database id with the required Data Pump privileges?


b) I see the following messages in the expdp log :

"FLASHBACK automatically enabled to preserve database integrity."

Is it still a good idea to add FLASHBACK_SCN or FLASHBACK_TIME to the expdp script?


c) Which parameter is more advisable to use for a consistent full database expdp:

FLASHBACK_SCN or FLASHBACK_TIME?

Do they both do the same job regarding data consistency?


Thanks!
Tom Kyte
June 22, 2010 - 6:35 pm UTC

a) never use system, never use sys, use your OWN accounts

b) already done for you, you would add those parameters to get a specific point in time - we'll do it as of the time of exporting.

c) depends. do you have a time, or do you know the SCN. The time is only precise within +/- 3 seconds (it is ABOUT 12:01:03, not exactly as of 12:01:03). The SCN is deadly precise - but you'd have to know what SCN you wanted to use and you probably do not. So, time is probably the one you want.


we use the timestamp to find an SCN to use, and that SCN will be very close to the point in time of your time - but not *exactly* at that point in time.
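If you do want the deadly-precise variant, the current SCN is easy to capture just before starting (10g and later):

```sql
-- either form returns the current SCN
SELECT current_scn FROM v$database;
SELECT dbms_flashback.get_system_change_number FROM dual;
-- then: expdp ... flashback_scn=<the value returned>
```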

consistent datapump export

Dan, June 22, 2010 - 11:26 pm UTC

Thanks tom!

>> a) never use system, never use sys, use your OWN accounts

We have 15 database schemas. Instead of specifying each schema, we are planning to take a full database expdp.

Are there any issues with using system or sys in expdp?

We specify only the required schemas during impdp.


>> b) already done for you, you would add those parameters to get a specific point in time - we'll do it as of the time of exporting.

This means that it is better not to specify these FLASHBACK parameters at all and accept the default, which is anyway consistent as of the time of export.

In my case, my full database expdp should be consistent as of the time of exporting. Any comments?

Tom Kyte
June 24, 2010 - 6:30 am UTC

... Are there any issue in using system or sys in expdp ? ...

Yes, I said DO NOT USE THEM.

sys cannot flashback - sys cannot be used to perform consistent exports.

Just use YOUR OWN ACCOUNT, YOUR ACCOUNT, not sys, not system.




comments: if you want a consistent export, you better stop using SYS altogether. Period.

consistent export datapump

Dan, June 24, 2010 - 11:22 am UTC

Thanks Tom for the insight!
I will be using my own account to take full database expdp.

<< b) already done for you, you would add those parameters to get a specific point in time - we'll do it as of the time of exporting.

FLASHBACK is enabled by default by expdp.

What FLASHBACK parameter does Oracle use by default during expdp?

I also tested without specifying any FLASHBACK parameters and deleted thousands of rows after the expdp started. I verified that the table was consistent as of the export time and that all the rows were still available.

Just to get your opinion: do I still have to specify FLASHBACK_TIME/FLASHBACK_SCN, or should I not specify these parameters at all?



Tom Kyte
July 06, 2010 - 9:35 am UTC

in all cases, a table is always exported "as of a single point in time". However, many times you want ALL tables exported as of a single point in time. If you want the equivalent of the old export "consistent=y", you would use

flashback_time=systimestamp

to get the data pump export to be "as of the point in time the export began, every table will be as of the same commit point in time"
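A sketch of a parfile for that (the directory, dump file, and log names are hypothetical; note that the bare systimestamp keyword is rejected on some older releases, as a 10.2 example later in this thread shows, in which case an explicit TO_TIMESTAMP literal is needed):

```
# full.par -- whole export consistent as of the moment the job starts
full=y
directory=dp_dir
dumpfile=full.dmp
logfile=full.log
flashback_time=systimestamp
```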



consistent export datapump

Dan, July 07, 2010 - 2:06 pm UTC

Hi Tom:

I was trying FLASHBACK_TIME in a Windows environment, and it's giving me these errors. What am I missing here?

LRM-00116: syntax error at ')' following 'mm-dd-yyyy hh24:'
LRM-00113: error when processing file 'expdp0626.par'

F:\backup\export\DataPumpexp>type expdp0626.par

DIRECTORY=PMIC_EXPDP_DIR
DUMPFILE=ExpDP_PMIC_FULL.dmp
CONTENT=ALL
LOGFILE=ExpDP_PMIC_FULL.log
FULL=Y
FLASHBACK_TIME="TO_TIMESTAMP(to_char(sysdate,'mm-dd-yyyy hh24:mi:ss'),'mm-dd-yyyy hh24:mi:ss')"
FILESIZE=20G
PARALLEL=16

F:\backup\export\DataPumpexp>expdp blpexp/bkp99exp parfile=expdp0626.par
LRM-00116: syntax error at ')' following 'mm-dd-yyyy hh24:'
LRM-00113: error when processing file 'expdp0626.par'


Tom Kyte
July 08, 2010 - 12:14 pm UTC

why not just use sysdate or systimestamp?

FLASHBACK_TIME in expdp

Dan, July 14, 2010 - 10:20 am UTC

<< why not just use sysdate or systimestamp?

I tried this, but apparently the Data Pump export consistent time is then 12:00:00 AM; it does not include the time component (hh24:mi:ss). How do I specify sysdate?

FLASHBACK_TIME="TO_TIMESTAMP(SYSDATE)"
Tom Kyte
July 19, 2010 - 12:55 pm UTC

flashback_time=systimestamp


period. if you to_timestamp(adate), you are really doing to_timestamp( TO_CHAR(adate) ), which of course typically (it depends entirely on your formats in place) loses the time portion


so, just use systimestamp.

Running Exp as Non SysDBA

Abd, November 16, 2010 - 1:50 pm UTC

Hi, 

Please pardon me if you think this is not related to the original EXP question.

Problem Description: I can run the exp utility when I use / as sysdba credentials, but when I use another id (even one with schema owner or dba privileges) I get:
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user WEBSYSTEMD2
. exporting PUBLIC type synonyms
EXP-00008: ORACLE error 904 encountered
ORA-00904: : invalid identifier
EXP-00000: Export terminated unsuccessfully

We contacted Oracle and they asked us to run:

SQL> SELECT owner, object_name, object_type,object_id FROM dba_objects WHERE status != 'VALID' ORDER BY 1,3;
SQL> select comp_id,comp_name,version,status from dba_registry order by 1;
SQL> show parameter java_pool
SQL> select * from dba_objects where object_name='DBMS_JAVA';

Do you know if any privileges have been revoked recently? i.e. has someone revoked the EXECUTE priv on the DBMS_JAVA package?

We came to know that, as part of some testing/recommendations from the audit team, the privileges on DBMS_JAVA had been revoked by other DBAs. After granting the execute privilege back to the non-sysdba (schema owner), we were able to run EXP.

Is there any issue/risk in granting execute on DBMS_JAVA to schema owners?
The Oracle Database version is Release 11.1.0.7.0.

Thanks

Tom Kyte
November 17, 2010 - 5:51 am UTC

Ask the audit team why they decided to revoke on it - what was their technical reason for doing so.

Then you can evaluate whether what you have done defeats their original intent.

exporting a table as of a point in time using flashback

starvin bose, February 02, 2011 - 7:55 am UTC

A user updated a table at 6pm, but entered the wrong information; now the user wants the 5pm data. How do I recover it? How do I export a table as of a point in time using flashback?


please help me
Tom Kyte
February 02, 2011 - 8:03 am UTC

you could just use the FLASHBACK table command to put it back the way it was.

Otherwise, just read about FLASHBACK_TIME with export.



http://docs.oracle.com/docs/cd/E11882_01/server.112/e16536/original_export.htm#SUTIL2679
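A sketch of the FLASHBACK TABLE route (the table name and timestamp are hypothetical; row movement must be enabled, and enough undo must still be available):

```sql
ALTER TABLE emp ENABLE ROW MOVEMENT;

FLASHBACK TABLE emp TO TIMESTAMP
  TO_TIMESTAMP('2011-02-02 17:00:00', 'YYYY-MM-DD HH24:MI:SS');
```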

starvin bose, February 02, 2011 - 7:55 am UTC

please, anyone, help me; I am working in the office and have a lot of pressure
Tom Kyte
February 02, 2011 - 8:03 am UTC

don't
we
all....

flashback_time=systimestamp does not work in some versions

Brandon, June 17, 2011 - 12:49 pm UTC

Above, you recommended using flashback_time=systimestamp to get a consistent expdp export, but that format does not work in some versions. I just tried it on 10.2.0.4 and got the errors below; however, it did work when I tried it on an 11.2.0.1 database on Windows with patch bundle 6:

ORA-39001: invalid argument value
ORA-39150: bad flashback time
ORA-01841: (full) year must be between -4713 and +9999, and not be 0


I also found that in the 11.2.0.1 database they have snuck in support for the old consistent=y (or consistent=true) parameter; although I couldn't find it documented anywhere, it definitely works, as you can see below:

E:\xyzdevDBExports>expdp system/gilbane@xyzdev tables=tridata.T_CLASSIFICATION,tridata.t_tristatus consistent=y dumpfile
=xyzdev_tst logfile=xyzdev_tst

Export: Release 11.2.0.1.0 - Production on Fri Jun 17 10:15:30 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "consistent=TRUE" Location: Command Line, Replaced with: "flashback_time=TO_TIMESTAMP('2011-06-17 10:15:30', 'YYYY-MM-DD HH24:MI:SS')"
Legacy Mode has set reuse_dumpfiles=true parameter.
Starting "SYSTEM"."SYS_EXPORT_TABLE_01": system/********@xyzdev tables=tridata.T_CLASSIFICATION,tridata.t_tristatus flashback_time=TO_TIMESTAMP('2011-06-17 10:15:30', 'YYYY-MM-DD HH24:MI:SS') dumpfile=xyzdev_tst logfile=xyzdev_tst reuse_dumpfiles=true
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 128 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "TRIDATA"."T_CLASSIFICATION" 31.63 KB 5 rows
. . exported "TRIDATA"."T_TRISTATUS" 53.62 KB 97 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
E:\xyzDEVDBEXPORTS\xyzDEV_TST.DMP
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 10:15:36
Tom Kyte
June 17, 2011 - 1:58 pm UTC

I also found in the 11.2.0.1 database that they have snuck in support for the
old consistent=y (or consistent=true) parameter, although I couldn't find it
documented anywhere - it definitely works as you can see below:


http://docs.oracle.com/docs/cd/E11882_01/server.112/e16536/dp_legacy.htm#CHDEDFFG



What are your NLS formats in each of those databases? It could be non-default formats getting in the way; in that case you can use explicit conversions with your own formats.
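The explicit-conversion approach Tom suggests can be sketched like this: build the timestamp string on the client with a fixed mask, so the database's NLS settings never come into play. The schema and file names are placeholders.

```shell
#!/bin/sh
# Hypothetical sketch: pass FLASHBACK_TIME with an explicit format mask
# instead of bare systimestamp, so non-default NLS settings can't break it.
NOW=$(date '+%Y-%m-%d %H:%M:%S')     # wall-clock time, fixed format
AS_OF="TO_TIMESTAMP('$NOW','YYYY-MM-DD HH24:MI:SS')"
echo "expdp scott schemas=scott dumpfile=scott.dmp flashback_time=\"$AS_OF\""
```

Because the mask is spelled out on both sides, the same command line works regardless of NLS_TIMESTAMP_FORMAT in the target database.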

If flashback_time=systimestamp does not work for you

Christof, December 15, 2011 - 5:46 am UTC

If flashback_time=systimestamp does not work for you, you can use either
flashback_time=to_timestamp(localtimestamp) or
flashback_time=to_timestamp_tz(systimestamp)

As expdp should normally be run on the database server, localtimestamp and systimestamp should be the same.
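Christof's alternatives fit naturally in a parameter file, which also sidesteps shell quoting of the parentheses. A minimal sketch, where the schema, directory object, and file names are placeholders:

```
# consistent_exp.par -- hypothetical expdp parameter file
schemas=scott
directory=DATA_PUMP_DIR
dumpfile=scott_consistent.dmp
logfile=scott_consistent.log
# one read-consistent point for the whole job:
flashback_time=to_timestamp(localtimestamp)
```

It would be invoked as: expdp scott parfile=consistent_exp.par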

To Christof

Konurbaev Evgeny, November 28, 2012 - 7:51 am UTC

flashback_time=to_timestamp(localtimestamp) helped me on an 11.2.0.1 database with these errors:

ORA-39001: invalid argument value
ORA-39150: bad flashback time

Thank you

consistent=y and flashback_time as sys

Rob H, September 04, 2013 - 5:04 pm UTC

Invoking Data Pump Export

The Data Pump Export utility is invoked using the expdp command. The characteristics of the export operation are determined by the Export parameters you specify. These parameters can be specified either on the command line or in a parameter file.

Note:
Do not invoke Export as SYSDBA, except at the request of Oracle technical support. SYSDBA is used internally and has specialized functions; its behavior is not the same as for general users.


This is directly from the documentation for Data Pump. Am I to assume that exporting as SYS, even with Data Pump, should be avoided?

This would contradict the statement :

"
You would either

a) not perform this export as "sys"
b) use flashback_scn or flashback_time if you do this as "sys" "
Tom Kyte
September 09, 2013 - 9:24 am UTC

sys can flashback query.

sys cannot have read-only transactions.


it would still be entirely true to say "DO NOT USE SYS, PERIOD. SYS IS SPECIAL, THINGS WORK DIFFERENTLY FOR SYS. DO NOT USE SYS FOR ANYTHING", just use your own accounts.

EXP_FULL_DATABASE & IMP_FULL_DATABASE role

Jasper, January 21, 2014 - 10:53 am UTC

If a consistent export is required, don't use SYS or SYSTEM => OK. Got it!

How about...
creating a new user,
then granting the EXP_FULL_DATABASE & IMP_FULL_DATABASE roles to the new user,
and using the new user to export with the "flashback_time=systimestamp" option?
Does this constitute using the SYSTEM user?
Will this new user be able to export in a consistent manner?

Thank you :)
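Jasper's approach can be sketched as below. All names and the password are placeholders, and this is only a sketch of the grants involved, not a vetted security setup; the script just writes out the SQL a DBA would run.

```shell
#!/bin/sh
# Hypothetical sketch: a dedicated export account instead of SYS/SYSTEM.
cat > create_exp_user.sql <<'EOF'
CREATE USER exp_user IDENTIFIED BY "change_me";
GRANT CREATE SESSION TO exp_user;
GRANT EXP_FULL_DATABASE, IMP_FULL_DATABASE TO exp_user;
EOF
echo "Run: sqlplus / as sysdba @create_exp_user.sql"
echo "Then: expdp exp_user full=y dumpfile=full.dmp flashback_time=systimestamp"
```

Because this account is an ordinary user (not SYS), flashback_time applies normally and the export is read-consistent.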

Madhu, September 29, 2016 - 10:59 am UTC

Hi Tom,

I am glad to hear the consistent=y explanation from you.
Does consistent=y mean only committed data will be exported?
Does consistent=n mean even uncommitted data will be exported?
Are the above statements right? If wrong, could you please elaborate in an understandable way?

September 29, 2016 - 11:06 am UTC

You will *never* export uncommitted data, because the export cannot *see* the uncommitted data.

Let's say my schema has 5 tables.

consistent=y means all 5 tables are exported as of the same logical point in time.

consistent=n means each of the 5 tables is exported as of the point in time when that particular table was encountered in the export run.
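The difference can be sketched with the original exp command lines; the schema and file names are placeholders.

```shell
#!/bin/sh
# Hypothetical sketch: consistent=y makes every table query in the run share
# one read-consistent point in time; the default (consistent=n) reads each
# table as of the moment exp reaches it.
CONSISTENT_RUN="exp scott owner=scott file=scott.dmp consistent=y"
DEFAULT_RUN="exp scott owner=scott file=scott.dmp"   # consistent=n is the default
echo "$CONSISTENT_RUN"
echo "$DEFAULT_RUN"
```

Either way, each individual table is internally consistent and contains only committed data; consistent=y additionally makes the tables consistent with each other.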