
Question and Answer

Connor McDonald

Thanks for the question, oracle.

Asked: January 09, 2017 - 10:03 am UTC

Last updated: January 12, 2017 - 1:52 am UTC

Version: 11.2.0

Viewed 1000+ times

You Asked

Hi,

I was taking an expdp logical backup of a schema and it failed with the following error:

ORA-31693: Table data object "CPG_PROD"."TBGPSDATA" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01555: snapshot too old: rollback segment number 37 with name "_SYSSMU37_99405081$" too small


Thanks



and Connor said...

By default, Data Pump unloads table data in a *consistent* format on a table-by-table basis.

So let's say your datapump

a) started at 9am
b) got to the table TBGPSDATA at 9:13am

As Data Pump continues to run, TBGPSDATA is unloaded to file as it existed at 9:13am. Say the unload of that table takes 15 minutes to complete; then at 9:24, while we are still unloading it, any changes to the table's blocks made AFTER 9:13am must be undone (using undo data) so that the view of the data remains as of 9:13am.

So if you have lots of transactions occurring on TBGPSDATA at the same time as you are unloading it, we might not be able to resurrect the data as it was back at 9:13am.

Check your 'undo_retention' parameter and the size of your undo tablespace. You might need to increase these, or run the Data Pump export at a 'quieter' time.
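A sketch of the sort of checks that advice implies (the tablespace name UNDOTBS1 is an assumption; substitute your own undo tablespace, and treat the retention value as an example only):

```sql
-- Current retention setting (seconds)
SHOW PARAMETER undo_retention

-- How big is the undo tablespace, and can it grow?
-- 'UNDOTBS1' is an assumed name; adjust to your environment.
SELECT tablespace_name,
       ROUND(SUM(bytes)/1024/1024) AS size_mb,
       MAX(autoextensible)         AS autoext
FROM   dba_data_files
WHERE  tablespace_name = 'UNDOTBS1'
GROUP  BY tablespace_name;

-- What the instance has actually been able to sustain
SELECT MAX(maxquerylen)         AS longest_query_secs,
       MAX(tuned_undoretention) AS tuned_retention_secs
FROM   v$undostat;

-- If the per-table unload runs longer than the retention,
-- raise it, e.g. to 3 hours (example value only):
ALTER SYSTEM SET undo_retention = 10800;
```

Remember that undo_retention is only honoured if the undo tablespace has room; a fixed-size, full undo tablespace can still overwrite undo early.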


Comments

oracle oracle, January 10, 2017 - 11:10 am UTC

table size = 335 GB

SQL> show parameter undo

NAME             TYPE        VALUE
---------------- ----------- ------------------------------
undo_management  string      AUTO
undo_retention   integer     900
undo_tablespace  string      UNDOTBS1

Thanks

Connor McDonald
January 11, 2017 - 6:06 am UTC

900 seconds means the unload of that table must complete within (roughly) 900 seconds, otherwise you run the ORA-01555 risk.

Perhaps look at unloading in parallel, or increasing the retention parameter.
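A sketch of what a parallel export might look like. The connect string, directory object, and schema name are assumptions for illustration; PARALLEL requires Enterprise Edition, and the dump file specification should include %U so each worker writes its own file:

```shell
# Sketch only -- credentials, directory and schema are assumed values.
expdp system@orcl \
  schemas=CPG_PROD \
  directory=DATA_PUMP_DIR \
  dumpfile=cpg_prod_%U.dmp \
  logfile=cpg_prod.log \
  parallel=4
```

Parallelism shortens the window during which each table's read-consistent image must be maintained, which is what reduces the ORA-01555 exposure.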

Alex, January 11, 2017 - 4:42 pm UTC

I always thought Data Pump was consistent for all tables as of the time it began. So in the above example, if it started at 9am and reached TBGPSDATA at 9:13, it would unload the 9am version along with everything else. It seems like the alternative could cause problems with FKs etc. if the tables were dumped at vastly different times during a long export.
Connor McDonald
January 12, 2017 - 1:52 am UTC

By default, I'm pretty sure it's table level.

eg in one session I did this:

drop table scott.a purge;
drop table scott.b purge;

create table scott.a ( x timestamp );
create table scott.b ( x timestamp );

declare
  ts timestamp;
begin
 loop
   ts := systimestamp;
   insert into scott.a values (ts);
   insert into scott.b values (ts);
   commit;
 end loop;
end;
/



and then datapump looked like this:

Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
. . exported "SCOTT"."A"                                 12.15 MB  796555 rows
. . exported "SCOTT"."B"                                 12.50 MB  819595 rows
. . exported "SCOTT"."ABC"                               5.898 KB       1 rows
. . exported "SCOTT"."DEPT"                              6.031 KB       4 rows
. . exported "SCOTT"."EMP"                               8.781 KB      14 rows
. . exported "SCOTT"."SALGRADE"                          5.960 KB       5 rows
. . exported "SCOTT"."T"                                 5.054 KB       1 rows
. . exported "SCOTT"."BONUS"                                 0 KB       0 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:


You can use CONSISTENT=Y (a legacy-mode parameter) or FLASHBACK_TIME to override this.
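A sketch of an export that is consistent to a single point in time across all tables (credentials and directory object are assumptions; in 11.2, the legacy CONSISTENT=y is translated into an equivalent FLASHBACK_TIME clause):

```shell
# Sketch only -- connect string and directory are assumed values.
# FLASHBACK_TIME pins every table to the same SCN-equivalent timestamp.
expdp system@orcl schemas=SCOTT directory=DATA_PUMP_DIR \
  dumpfile=scott.dmp logfile=scott.log \
  flashback_time="TO_TIMESTAMP(TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS')"
```

Note the trade-off: because every table is then read as of the export's start time, this *widens* the undo window required and makes ORA-01555 more likely, not less.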

Alex, January 12, 2017 - 4:19 pm UTC

Ahhh yes, you're right. I forgot the scripts I've always used in the past had consistent=y flipped on. The problem I was remembering with FK constraints was valid, but the situation was a little different... we were running with consistent=y but as SYS :(

More to Explore

Data Pump

All of the database utilities including Data Pump are explained in the Utilities guide.