
Question and Answer

Connor McDonald

Thanks for the question, gonzalo.

Asked: February 26, 2018 - 4:06 pm UTC

Last updated: March 01, 2018 - 1:26 am UTC

Version: 11gR2

Viewed 1000+ times

You Asked

Hello,

I am making a full export with this command:

expdp system/systembase DIRECTORY=DATA_PUMP_DIR DUMPFILE=full.dmp LOGFILE=full.log FULL=y


My problem is that while doing the export, my disk fills up, and no more space is available.

The COMPRESSION parameter is not an option because of our license level.

So I am thinking of doing the full export schema by schema or tablespace by tablespace.
I mean something like:

export tablespace1; compress exported tablespace1
export tablespace2; compress exported tablespace2
and so on.
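
In shell terms, a minimal sketch of that idea could be the following (the schema names are taken from the listing further down; the filesystem path behind DATA_PUMP_DIR is an assumption):

# Hypothetical sketch: export one schema at a time and compress each
# dump before starting the next, so at most one uncompressed dump
# sits on disk at any moment.
for s in SYSNET_CUM SYSNET_DW SYSNET_EAI; do
  expdp system/systembase DIRECTORY=DATA_PUMP_DIR \
        SCHEMAS=$s DUMPFILE=${s}.dmp LOGFILE=${s}.log
  gzip /u01/app/oracle/admin/ORCL/dpdump/${s}.dmp   # assumed DATA_PUMP_DIR path
done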

My questions are:

1) Do you recommend exporting tablespaces or schemas?
Going by this: http://www.dba-oracle.com/t_data_pump_export_tablespace.htm
I guess schemas?

2) Which schemas or tablespaces should I export in order to have the equivalent of a full export?

3) Is there another option or solution to this problem?

I'll leave the list of schemas and tablespaces here, in case it is helpful:



SQL> select distinct owner FROM   ALL_TABLES ;

OWNER
------------------------------
ADMIN
APEX_030200
APPQOSSYS
CTXSYS
DBSNMP
EXFSYS
FLOWS_FILES
MDSYS
OLAPSYS
ORDDATA
ORDSYS
OUTLN
OWBSYS
PERFSTAT
SCOTT
SYS
SYSMAN
SYSNET_CUM
SYSNET_DW
SYSNET_EAI
SYSNET_FAR
SYSNET_GAM
SYSNET_GG
SYSNET_HEX
SYSNET_HF
SYSNET_HMO
SYSNET_HPR
SYSNET_SLA
SYSNET_VER
SYSTEM
WMSYS
XDB

32 rows selected.

SQL> select distinct owner FROM   ALL_TABLES  WHERE owner NOT IN (SELECT username FROM   DBA_USERS WHERE  default_tablespace in ('SYSTEM', 'SYSAUX', 'USERS') OR account_status != 'OPEN') ;

OWNER
------------------------------
PERFSTAT
SYSNET_CUM
SYSNET_DW
SYSNET_EAI
SYSNET_FAR
SYSNET_GAM
SYSNET_GG
SYSNET_HEX
SYSNET_HF
SYSNET_HMO
SYSNET_HPR
SYSNET_SLA
SYSNET_VER

13 rows selected.


And tablespaces are:

SQL> select NAME from v$tablespace;

NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
USERS
STATSPACK_DATA
ADMINISTRACION
LABORATORIO
HISTORIA_CLINICA
SALUD
SYSADMIN
INDICES
INDICES_SLA
INDICES_GAM
INDICES_HPR
TEMP


and Connor said...

You can do a schema at a time....but obviously, when it comes to import, you might have some dramas if there are cross-schema dependencies. That might be even worse if you go to tablespace level. So choose the option which has the *least* cross-dependencies between the export files.

Because if there *are* dependencies, you might have to do something like:

- full export metadata only
- full import to get all the schema objects created
- then schema level imports for the data.

And it all gets very messy very fast.
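
As a hedged sketch of that sequence (the directory, dump file names, and the example schema are assumptions):

# 1. Full export, metadata only - a small file.
expdp system/systembase DIRECTORY=DATA_PUMP_DIR FULL=y \
      CONTENT=METADATA_ONLY DUMPFILE=meta.dmp LOGFILE=meta.log

# 2. Full import on the target to create all schema objects, empty.
impdp system/systembase DIRECTORY=DATA_PUMP_DIR FULL=y \
      DUMPFILE=meta.dmp LOGFILE=meta_imp.log

# 3. Then move the data schema by schema, compressing between runs.
expdp system/systembase DIRECTORY=DATA_PUMP_DIR SCHEMAS=SYSNET_CUM \
      CONTENT=DATA_ONLY DUMPFILE=sysnet_cum.dmp LOGFILE=sysnet_cum.log
impdp system/systembase DIRECTORY=DATA_PUMP_DIR DUMPFILE=sysnet_cum.dmp \
      CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=APPEND LOGFILE=sysnet_cum_imp.log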

But seriously....

Just.......get.............more..........disk.

So much easier. Tell them AskTOM told you to :-)

My experience over the years is that when someone says to you "You cannot have any more space", what they are really saying is "We cannot be bothered making the effort to get that space for you".

Terabytes of disk are unbelievably cheap nowadays.


Comments

compress on the fly

Racer I., February 27, 2018 - 7:03 am UTC

Hi,

I think on Unix you can write to a named FIFO pipe instead of directly to a file. The pipe can be read by gzip, which writes a compressed file. The same works during import, only in reverse.

regards,

NB: this also works as an external table, where you can read from a shell script which simply unpacks a compressed file to stdout, or pipes something in from a remote server via netcat.
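
A minimal sketch of the pipe idea (paths are assumptions). One caveat: Data Pump writes its dump files from the server side, so this technique applies to the classic exp/imp utilities rather than to expdp itself:

# Export through a FIFO, compressing on the fly.
mkfifo /tmp/exp.pipe
gzip -c < /tmp/exp.pipe > /backup/full.dmp.gz &
exp system/systembase FULL=y FILE=/tmp/exp.pipe
wait

# Import is the same in reverse: gunzip feeds the pipe, imp reads it.
mkfifo /tmp/imp.pipe
gunzip -c /backup/full.dmp.gz > /tmp/imp.pipe &
imp system/systembase FULL=y FILE=/tmp/imp.pipe
wait
rm /tmp/exp.pipe /tmp/imp.pipe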

Good

gonzalo amadio, February 28, 2018 - 8:52 pm UTC

I have just told them that... haha

But just to know: suppose everything is clean, I mean no cross-dependencies.

Which schemas / tablespaces should I export? All of them? All but the system schemas?

Thank you
Connor McDonald
March 01, 2018 - 1:26 am UTC

select schema_name from V$SYSAUX_OCCUPANTS

is an easy way of excluding those schemas you won't need in your target database.

But if that list of remaining schemas still exports to a large file, you will still need to split them out.
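
For example, a hedged sketch of turning that view into a candidate export list (the extra exclusions are assumptions about what you would not want):

-- Application schemas worth exporting: everything that is not a
-- SYSAUX occupant and not SYS/SYSTEM itself.
select username
from   dba_users
where  username not in (select schema_name from v$sysaux_occupants)
and    username not in ('SYS', 'SYSTEM');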

Other options to consider (both sketched below):

1) Network mode. Don't create files at all - just Data Pump import directly across the network from the source.

2) Set the FILESIZE parameter so that multiple files are created. Then, as file "n+1" is being written, you can compress file "n". Note - you can't do that on the *first* file, because it is written to *throughout* the entire process.
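
As hedged sketches of both (the database link name, file size, and log file names are assumptions):

# Option 1: network-mode import - no dump file at all. Assumes a
# database link SRC_LINK exists in the target DB pointing at the source.
impdp system/systembase DIRECTORY=DATA_PUMP_DIR NETWORK_LINK=SRC_LINK \
      FULL=y LOGFILE=net_imp.log

# Option 2: split the dump into 5GB pieces with FILESIZE and the %U
# substitution variable; compress each completed piece while the next
# one is being written (except the first, per the note above).
expdp system/systembase DIRECTORY=DATA_PUMP_DIR FULL=y \
      DUMPFILE=full_%U.dmp FILESIZE=5G LOGFILE=full.log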
