
Question and Answer

Connor McDonald

Thanks for the question, oracle.

Asked: November 10, 2017 - 5:04 am UTC

Last updated: November 23, 2017 - 1:02 am UTC

Version: 11.2.0

Viewed 1000+ times

You Asked

Hi team,

We are taking a daily export of a schema with expdp, but for the past few days we have continuously been getting this error:

snapshot too old.

The table is a partitioned table, partitioned by week.

The script we are using for expdp is:

expdp user/pass@dbname directory=dumpdir dumpfile=produser.dmp logfile=produser.log consistent=no compression=ALL

How can we overcome this error?

Thanks

and Connor said...

Your options are one or more of:

- a longer undo retention setting (a sketch of checking and raising it follows below)
- do less *other* work during the export
- look at using PARALLEL to complete the Data Pump job in a shorter time frame
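To illustrate the first option (these statements are not from the original answer, and the 7200-second value is purely illustrative): from SQL*Plus, check the current setting and then raise it so that it comfortably exceeds the export's elapsed time.

show parameter undo_retention

-- raise the minimum undo retention to 2 hours (illustrative value only)
alter system set undo_retention = 7200;

Note that the undo tablespace must also be large enough (or able to auto-extend) to actually honour the retention you ask for.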

The other thing is to ask: "Why am I taking the export?"

For example, if it is to populate other environments then:
- *Maybe* an RMAN duplicate is a better option
- *Maybe* transportable tablespaces are a better option

For example, if it is for a backup, then
- *Maybe* an RMAN backup is a better option

etc



Comments

oracle oracle, November 13, 2017 - 4:59 am UTC

Thanks for the reply.
Connor McDonald
November 13, 2017 - 3:23 pm UTC

glad we could help

oracle oracle, November 15, 2017 - 10:31 am UTC

One more question: could you please explain how the export process works, and why it leads to this error?
Connor McDonald
November 16, 2017 - 1:29 pm UTC

An export is logically equivalent to a query on each table in the export.

So it's not really an export issue, but a query duration issue.

I start exporting a table at 9am - so I need the rows as they were at 9am.

If that table is huge and takes (say) 20 minutes to read, then by the time I am reading blocks at 9:20am I have to roll back any changes made between 9:00 and 9:20 to get a consistent view of the data.

If I cannot do that ... then "snapshot too old"
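To make that concrete (an illustration, not part of the original answer): v$undostat records, for each ten-minute interval, how long the longest-running query was. If that regularly exceeds undo_retention, "snapshot too old" (ORA-01555) becomes likely:

-- longest query (in seconds) observed in each interval
select begin_time, maxquerylen
from   v$undostat
order  by begin_time;

-- current minimum undo retention (in seconds)
select value from v$parameter where name = 'undo_retention';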

oracle oracle, November 20, 2017 - 12:26 pm UTC

Thanks for the reply.

I understood the process, but I have two follow-ups:

1. I added the parameter consistent=no, and I still get the same error.

2. With the traditional exp utility, the export works fine without any error.
Connor McDonald
November 21, 2017 - 1:20 pm UTC

- a longer undo retention setting
- do less *other* work during the export
- look at using PARALLEL to complete the Data Pump job in a shorter time frame

oracle oracle, November 22, 2017 - 9:21 am UTC

Thanks for the reply.

expdp test/test directory=datapump dumpfile=test%u.dmp logfile=test.log parallel=4 compression=all

As you suggested, I used the PARALLEL option and the backup completed successfully. It generated:

test01.dmp
test02.dmp
test03.dmp
test04.dmp

Now, a few days later, I want to import from these dump files, but the log file was deleted by mistake. How can I check that test01.dmp, test02.dmp, test03.dmp, and test04.dmp belong to the same parallel dump file set?

Because every backup uses the same naming convention, test05.dmp, test06.dmp, ... are also in the directory.

In short: can I check which exported dump files form one parallel set, so that I can specify exactly those file names at import time?

Otherwise, when I try the import, it shows the error:
dump file set is incomplete.

Thanks
Connor McDonald
November 23, 2017 - 1:02 am UTC

As long as all *potential* dump files are included you should be OK, eg:

C:\Users\hamcdc>expdp userid=mcdonac dumpfile=dp%U directory=temp parallel=2

Export: Release 12.2.0.1.0 - Production on Thu Nov 23 08:53:31 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Starting "MCDONAC"."SYS_EXPORT_SCHEMA_01":  userid=mcdonac/******** dumpfile=dp%U directory=temp parallel=2
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
...
...


and then for the import, I never specified the full set of files - just the same wildcard:

C:\Users\comcdona>impdp directory=TEMP dumpfile=dp%U.dmp userid=mcdonac@pdb122a

Import: Release 12.2.0.1.0 - Production on Thu Nov 23 08:57:54 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "MCDONAC"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "MCDONAC"."SYS_IMPORT_FULL_01":  directory=TEMP dumpfile=dp%U.dmp userid=mcdonac/********@pdb122a
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/DB_LINK
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/CLUSTER/CLUSTER
Processing object type SCHEMA_EXPORT/CLUSTER/INDEX
...
...



But obviously

"Because every backup uses the same naming convention, test05.dmp, test06.dmp, ... are also in the directory."

is not a great situation - choose names that make future reference easy, eg put the date in the filename.
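If the log file is gone and the names don't help, each dump file's header records which job wrote it. A hedged sketch using DBMS_DATAPUMP.GET_DUMPFILE_INFO (the file name and directory object below are placeholders): files produced by the same parallel export carry the same job name and GUID in their headers, so printing the header items for each candidate file lets you group them.

set serveroutput on

declare
  l_info     sys.ku$_dumpfile_info;   -- nested table of (item_code, value) pairs
  l_filetype number;                  -- kind of file the header describes
begin
  dbms_datapump.get_dumpfile_info(
    filename   => 'test01.dmp',       -- placeholder file name
    directory  => 'DATAPUMP',         -- placeholder directory object
    info_table => l_info,
    filetype   => l_filetype);

  -- print every header item; compare the output across files -
  -- members of one parallel set share the same job name and GUID
  for i in 1 .. l_info.count loop
    dbms_output.put_line(l_info(i).item_code || ' = ' || l_info(i).value);
  end loop;
end;
/

Run it once per candidate file and group the files whose headers match.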
