

Question and Answer

Tom Kyte

Thanks for the question, Lawrence.

Asked: May 02, 2000 - 1:21 pm UTC

Last updated: March 08, 2007 - 10:04 am UTC



You Asked

How do I export when the export file is larger than 2 GB on an HP-UX
server, AND NOT USING COMPRESSION?



and Tom said...



You would export to a device that does not support seeking, such as a tape (not recommended; really slow) or a pipe.

Why not use compression? It'll considerably cut down on the size.

I myself use both compression AND split to break my export into many manageable-sized files (500 MB is my chosen size). You could just use split and not compress if you want.

Basically, you would create a pipe in the OS via:

$ mknod somefilename p

and then export to that pipe. You would set up another process in the background that 'eats' the contents of this pipe and puts it somewhere. I use split; you could use 'cat' to just put it into another file (if cat supports files >2 GB -- that's the problem here, most utilities do not; you need to use a special file I/O API to get >2 GB file support).

Here is a script you can use as a template. Yes, it uses compression, but you can take that out. It's here to show you one method of doing this.


------------------------------
#!/bin/csh -vx

setenv UID /
setenv FN exp.`date +%j_%Y`.dmp
setenv PIPE /tmp/exp_tmp_ora8i.dmp

setenv MAXSIZE 500m
setenv EXPORT_WHAT "full=y COMPRESS=n"

echo $FN

cd /nfs/atc-netapp1/expbkup_ora8i
ls -l

rm expbkup.log export.test exp.*.dmp* $PIPE
mknod $PIPE p

date > expbkup.log
( gzip < $PIPE ) | split -b $MAXSIZE - $FN. &

# uncomment this to just SPLIT the file, not compress and split
#split -b $MAXSIZE $PIPE $FN. &

exp userid=$UID buffer=20000000 file=$PIPE $EXPORT_WHAT >>& expbkup.log
date >> expbkup.log


date > export.test
cat `echo $FN.* | sort` | gunzip > $PIPE &

# uncomment this to just SPLIT the file, not compress and split
#cat `echo $FN.* | sort` > $PIPE &

imp userid=sys/o8isgr8 file=$PIPE show=y full=y >>& export.test
date >> export.test

tail expbkup.log
tail export.test

ls -l
rm -f $PIPE
--------------------------------------------------

This also always does an 'integrity' check of the export right after it is done, using an import with show=y; that also shows how to use these split files with import.




Comments

Exporting more than 2 GB Compressed

scott clark, June 14, 2001 - 2:27 pm UTC

Hi Tom,
Thanks for the script. I haven't actually checked the size of the export file uncompressed, and I don't think we are going to have enough disk space to support splitting the export without compression, but I will need to verify.
Once we get a 64-bit OS at year end, this limitation will be resolved, at least until we hit the other magic number of 4 GB.

I have friends who have 5 GB compressed files on Solaris. Why are they able to surpass the 4 GB limitation, assuming they are using a 64-bit OS?

Great script, but how to import?

Andrew Cohen, November 05, 2001 - 8:31 pm UTC

Great script. I see you can import through the pipe on the same machine, but now if I ftp it to a new machine, how do I piece the files back together? -- something like?:

gunzip -c *.dmp.* > $PIPE
imp ... file=$PIPE ...

but does the *.dmp.* guarantee that it will gunzip the files in the right order?

Tom Kyte
November 05, 2001 - 8:51 pm UTC

cat `echo $FN.* | sort` | gunzip > $PIPE &



sorts the filenames, sends them to cat, which gives them to gunzip in the right order.
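
For the new machine, a minimal sketch of the import side might look like this; the piece names, userid and pipe path are illustrative, so adjust them to whatever your export actually produced:

------------------------------
#!/bin/sh
PIPE=/tmp/imp_tmp.dmp
rm -f $PIPE
mknod $PIPE p

# reassemble the ftp'd pieces in sorted order and decompress into the pipe
cat `echo exp.123_2001.dmp.* | sort` | gunzip > $PIPE &

# import reads from the pipe as if it were one large dump file
imp userid=scott/tiger full=y file=$PIPE log=imp.log

rm -f $PIPE
------------------------------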

Something interesting

Mark Griffin, July 10, 2003 - 9:39 am UTC

Tom, I implemented your script in our production environment (Sun Solaris 5.7, Oracle 9i DB 9.0.1.4).

I changed the parameter MAXSIZE to 4000m (2 gig, I believe).
I then submitted the script to the job queue. The job ran fine, in that the export completed successfully (according to the log).

However, I also had the same job run from crontab, and I'm getting the following:
. . exporting table WWV_DOCUMENT$
EXP-00056: ORACLE error 24801 encountered
ORA-24801: illegal parameter value in OCI lob function
. . exporting table WWV_DOCUMENT$_ARCHIVE
EXP-00065: Error writing lob to the dump file.
. . exporting table XREF_DISC 142 rows exported
EXP-00002: error in writing to export file
EXP-00002: error in writing to export file
EXP-00000: Export terminated unsuccessfully
Thu Jul 10 00:15:36 CDT 2003

Do you have any clues as to what the problem might be?


Tom Kyte
July 10, 2003 - 10:24 am UTC

4000m is NOT 2gig

2048m is 2gig


you don't have large file support and that is what this is saying.
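
A quick sanity check on the arithmetic (plain shell, nothing Oracle-specific):

$ expr 2 \* 1024
2048
$ expr 4000 / 1024
3

So MAXSIZE=4000m is roughly 3.9 GB per piece, well past the 2 GB limit; 2048m (or a bit less, to be safe) is what "2 gig" means to split.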

ahhh okay but...

Mark Griffin, July 10, 2003 - 11:47 am UTC

Well, what can I say; after I "really" calculated 4000m/1024 = total gig, yes, you are right ;).


However, this still doesn't address the fact that this script ran successfully via job submittal (to the job queue) but was not successful as a crontab run.

The other puzzling thing to me is that previously we were performing one big dump of the DB. The DB is currently 46 gig in size, and export didn't have a problem with the size of the file, in that it didn't complain with the ORA-24801 error.


I guess this leaves the question: even though my job submittal was successful in the sense that it completed, do I have a valid backup (no corrupt data)? According to my log file everything appears to be okay.



Tom Kyte
July 10, 2003 - 1:28 pm UTC

I can only guess that when run as a job, the database it was run against did not create any files >2 gig.

Or, look for environmental differences. If you run the "cron job" from the command line -- does it succeed or fail?

what were the file sizes from the "oem job" one?

I believe...

A reader, July 10, 2003 - 1:45 pm UTC

First off, thank you very much for all your help.

I'm going to have to research the cron job submittal side of things. I believe what is happening is that cron is so barebones (in that it really knows nothing about the environment other than what you tell it) that it's running into some type of ulimit or other parameter setting problem.




It is very useful for our migration

Tushar Solanki, December 09, 2004 - 11:27 am UTC

You provided the script for export with the split file option. Now, how does the import script work with split files? Do you have a ready script for import?

Tom Kyte
December 09, 2004 - 3:16 pm UTC

the import command is right there in the script itself.... just remove the show=y

A reader, December 10, 2004 - 2:43 pm UTC

How do I merge the files that were split into many pieces?


Tom Kyte
December 10, 2004 - 8:10 pm UTC

did you read the entire script, the answer lies therein. I demonstrated how to import right there from the split files...


cat `echo $FN.* | sort` | gunzip > $PIPE &

imp userid=sys/o8isgr8 file=$PIPE full=y


Possible to imp directly from the same pipe exp uses?

Kim Berg Hansen, January 17, 2005 - 8:31 am UTC

Hi, Tom

We're about to move/upgrade our 8.1.7 base on a Tru64 Alpha machine to a 10g base on an HP-UX Itanium machine.

The base is approx. 80GB data and 150GB indexes.

To keep the upgrade simple, and because of big-endian/little-endian conversions, we plan to use exp/imp (we can have a 36-hour downtime window and a text export of the 80GB data finishes in 2½ hours - so there *should* be enough time to use exp/imp :-)

First the idea was to exp on Tru64, ftp to HP-UX and then imp on HP-UX.
But we didn't have enough disk space on Tru64, so instead we made an NFS share on the HP-UX and mounted it on the Tru64 machine - which gave us the added benefit of skipping the ftp step, so we just exp on Tru64 to the NFS share and then imp on HP-UX. (This method I'm testing right now and it works dandy.)

Now I'm reading up on your various recommendations about imp/exp and came across this method of using pipes - and I got an idea (or maybe just a silly brainstorm :-)

Would it be possible to create a pipe on that NFS share and have exp on the Tru64 export to the pipe while imp on the HP-UX imported directly from that same pipe?
Can a pipe be used between two machines at all?
Would that give timing problems (ie. what if imp reads data faster from the pipe than exp feeds the pipe)?
Would there be big/little-endian problems using a pipe?

Hope you can tell me if that idea is reasonable or whacko :-)


Regards

Kim Berg Hansen

Tom Kyte
January 17, 2005 - 8:43 am UTC

named pipes don't work over the network.

One idea might be to upgrade the 817 database to 10g and either

a) use data pump, it streams over the network -- no disk (see the sketch below)
b) use transportable tablespaces
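
For option (a), a rough sketch of a network-mode import once both sides are on 10g; the database link, credentials and schema list here are made up for illustration:

# on the 10g target host; SRC_LINK points back at the source database
sqlplus system/manager <<EOF
create database link SRC_LINK connect to system identified by manager using 'source_tns';
EOF

# NETWORK_LINK makes impdp pull the rows over the link directly -- no dump file on disk
impdp system/manager network_link=SRC_LINK schemas=scott directory=DATA_PUMP_DIR logfile=net_imp.log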




Okay - it was whacko :-)

Kim Berg Hansen, January 17, 2005 - 9:14 am UTC

Thanks for your quick response (must have hit the right time of day :-)

Okay, since that idea wasn't so good then I can just keep to regular exp to 2GB files on that NFS share and imp'ing those files. The pipe idea wouldn't shave more than 2½-3 hours anyway and the 36-hour window should be adequately large.

Upgrading first on Tru64 and then moving it to HP-UX using more "modern" techniques is a good idea - in this case I just think that'll take more time than the other way.

Thanks again for your time.

What about NON-English data

Nurlan, January 31, 2005 - 1:43 am UTC

Hello Tom,
Your answer was really helpful. We were thinking of implementing something like this. But I have one concern, namely: how would all your steps work for non-English databases? You can't use cat, e.g., when you have non-English characters in your database; is that true?
Thanks,


Tom Kyte
January 31, 2005 - 8:11 am UTC

why not? it is just a stream filter (byte at a time)

What export parameters can affect the size of the export file?

A reader, February 12, 2005 - 4:28 am UTC

Hi Tom,

What export parameters can affect the size of the resulting export file? I wonder whether some parameters can, for example buffer, but I tried with two different values (10M and 5M) and got the same size.

Thanks

Tom Kyte
February 12, 2005 - 12:44 pm UTC

the size of the dmp file is more or less dictated by the amount of data, nothing else.


so, rows=n will result in a much smaller dump file than rows=y (since you don't extract the data)

it is all about the data, not about buffer sizes (memory), readsizes and so on. just the amount of data you have to export.

A reader, February 12, 2005 - 5:07 pm UTC

Hi Tom,

I tried two exports of table A on database A and table A on database B. The data contents of table A are the same in databases A and B. However, I found that the resulting export file size is different for these two exports. And when I run "bdiff export_on_A.dmp export_on_B.dmp" on Solaris, it shows a lot of differences between these two export files (which include some DDL differences, such as a different initial extent size, and also binary differences which I believe are not related to the DDL). It puzzles me why there are binary differences between these two files. Database A is on machine A and database B is on machine B. The OS and database version of these two environments are the same. The exports were done on machine A against database A, and on machine A against database B (via TWO_TASK). Do you know why the export file sizes are different for these two exports?

Thanks

Tom Kyte
February 12, 2005 - 5:19 pm UTC

export is a logical data dump tool

I would be incredibly SURPRISED to see them be the same.

there is lots of binary data in there, not just your data, but Oracle's data.


you don't say "how different in size", but you already pointed out "they have different physical structures, the ddl is different"

you cannot compare apples and toaster ovens. You do not have "the same things" here, you have two logical dumps of some data. nothing more, nothing less.

A reader, February 12, 2005 - 10:33 pm UTC

Hi Tom,

Do you mean that if Oracle's data is different (such as DDL of the table), then the resulting export file size may be different?

Thanks

Tom Kyte
February 13, 2005 - 9:13 am UTC

absolutely -- the DDL is right there in the dmp file, you saw that, you told us the initial extents in the dmp files were different.

A reader, February 13, 2005 - 5:43 pm UTC

Hi Tom,

But apart from the DDL differences (which show up as plain text) in the "bdiff" output, we also found that there are binary differences in the dump files. If the binary data (in the dump file) is related to the data, does it mean that there are data differences between these two dump files?

Thanks

Tom Kyte
February 13, 2005 - 6:52 pm UTC

THEY ARE LOGICAL DUMPS

there is binary data that is yours in there.

there is binary data in there that is ours.

hey -- maybe we are storing the length of a string, for example, in a 2- or 4-byte integer (and doing byte swapping when needed). So, the different-length DDL would account for that.


UNLESS you copy the dmp file, I would *expect* them to be just two files that pass in the night and have no relation to each other.

Database Export

Raghav, February 14, 2005 - 2:18 am UTC

Hi Tom,

Is there any such constraint (size) in a Windows environment? If so, is there any remedy for it?

Thanks in Advance
Raghav

Tom Kyte
February 14, 2005 - 8:34 am UTC

see the parameters file= and filesize=

you can have export create multiple files of filesize bytes maximum, if you hit a problem.
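
For example (a sketch only; the userid, file names and sizes are illustrative, and the same command line works from a Windows prompt):

# exp fills exp1.dmp up to 1900 MB, then moves on to exp2.dmp, and so on
exp userid=scott/tiger full=y file=exp1.dmp,exp2.dmp,exp3.dmp filesize=1900MB log=exp.log

imp is later given the same file= list (and the same filesize=) so it knows where each piece ends.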

Export/Import

Ajay Prakash, February 23, 2005 - 4:41 am UTC

Hi Tom, the article is good, but my requirement is to import the data of a table in a Unix environment from an export dump file generated by Windows 2000.
The file size is 1 GB only. Please guide....
Regards,

Tom Kyte
February 23, 2005 - 9:13 am UTC

ftp in binary mode and go

nothing "special" out it.

just make sure to binary ftp the file, don't let windows mess it up for you.



Export greater than 2 GB on AIX 4.0.0 / Oracle 8.0.4

anil sharma, May 19, 2005 - 2:35 am UTC

While exporting more than 2 GB of data, the export terminates unsuccessfully with error EXP-00002. Will the script using named pipes for exporting be helpful in this case? Is there any other solution to this problem?

Tom Kyte
May 19, 2005 - 7:50 am UTC

with 804, you'll have to use named pipes to exceed 2gig if you don't have large file support. 804 did not have the files= option for export yet.

but -- I pray you are not using this as a 'backup' tool, right?

exp >2 GB on Oracle 804 on AIX 4.0.0

anil sharma, May 28, 2005 - 2:20 am UTC

Hi Tom,
The suggestion of a named pipe for exporting more than 2 GB was interesting, and successful with compression as well as with cat. However, when the same files were uncompressed and imported, the import failed with a read error. Since 804 does not support multiple files and filesize, what is the solution for taking a backup of data >2 GB?

Tom Kyte
May 28, 2005 - 9:18 am UTC

if you split the compressed file
AND
the resulting files could not be put back together
THEN
you did something wrong or your OS is botched.

imp cannot tell where the file came from. Is this an OS bug perhaps? You might contact them and see if there was something known to be wrong with the OS way back then.


But, seeing as how EXPORT IS NOT A DATABASE BACKUP tool, the way to take backups >2 GB is to take a backup. A real backup, a database backup. Export is a way to move or copy data from database 1 to database 2.
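
For reference, a minimal sketch of a real backup with RMAN (the connection is illustrative; your backup destination and retention policy are your own):

rman target / <<EOF
backup database plus archivelog;
list backup summary;
EOF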

A reader, December 14, 2005 - 2:01 pm UTC

Tom,

In 10g R1, expdp had limitations with pipes. Is the restriction still there in 10g R2 as well?

Thanks,
Rahul.

Tom Kyte
December 14, 2005 - 2:24 pm UTC

No pipes; the architecture is *very* different. expdp - the simple command line interface - does not write to the output files; the database server itself does this.

How about compressing backups ?

Shalini, December 28, 2005 - 11:29 pm UTC

Tom,

The pipes, as used for export -- can they be used for backups?
E.g., I'm taking a hot backup of the database but don't have enough space on the server to keep the backup, which is around 40 GB uncompressed. Can I somehow split/compress this backup while the backup through RMAN is running?

With Regards,
Shalini.

Tom Kyte
December 29, 2005 - 10:17 am UTC

Well, in current releases RMAN itself supports compression. If your release of RMAN cannot compress, then no, you have to wait until the backup exists to compress it (or back up to a compressed filesystem, if you have such a thing in your OS).
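
In 10g, for instance, RMAN can compress the backup pieces as it writes them; a sketch (the connection is illustrative):

rman target / <<EOF
backup as compressed backupset database;
EOF

That way the 40 GB uncompressed copy never has to exist on disk at all.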

With Compress

Maria Saavedra, January 26, 2006 - 10:02 am UTC

Hi Tom!!
Thanks for your help; this web site is amazing!

Can you tell me how I can use your script but only using compress and not split?

I need to compare it with my script because I have so many pipe processes hanging up on the server... (SunOS 5.6)

Thanks

Tom Kyte
January 26, 2006 - 10:39 am UTC

( gzip < $PIPE ) | split -b $MAXSIZE - $FN. &

could become


( cat < $PIPE ) | split -b $MAXSIZE - $FN. &

for example (simplest "fix")

or

split -b $MAXSIZE $PIPE $FN. &

Only compress

A reader, January 26, 2006 - 10:44 am UTC

Tom..

Me again.

About your answer: I do not need split because my DB is really small... I only need COMPRESS.

Please check.

Tom Kyte
January 26, 2006 - 10:50 am UTC

sorry, read that backwards....

( gzip < $PIPE ) | split -b $MAXSIZE - $FN. &

would/could become

gzip < $PIPE > $FN.dmp.gz &

for example (simple scripting here - best for you to have someone you work with that does scripts help you out, test it, verify it)
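
Putting that together, a compress-only version of the whole thing might look something like this; it is only a sketch (the userid, paths and file names are made up), so test it before trusting it:

--------------------------------
#!/bin/sh
PIPE=/tmp/exp_small.dmp
FN=exp.`date +%j_%Y`.dmp.gz

rm -f $PIPE $FN
mknod $PIPE p

# background reader: compress whatever exp writes into the pipe
gzip < $PIPE > $FN &

exp userid=scott/tiger full=y file=$PIPE > expbkup.log 2>&1

rm -f $PIPE
--------------------------------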

Hanging Process

Maria Saavedra, January 30, 2006 - 7:23 am UTC

Hi Tom.
I'm using something like your script, as you can see here.
I have 7 small databases on my server (SunOS 5.6), and we do an export every midnight.
This is the csh script:
**************************************
geoquest% more export_web.csh
#!/bin/csh
cd $ORASCRIPT
setenv ORACLE_SID web
rm expweb.csh
sqlplus sysexp/expsys @genexports_web.sql
chmod 775 expweb.csh
rm $EXPWEB/expweb.csh
cp expweb.csh $EXPWEB
cd $EXPWEB
echo setenv ORACLE_SID web > final.txt
cat expweb.csh >> final.txt
cat final.txt > expweb.csh
rm final.txt
csh expweb.csh
******************************
The SQL script genexports_web generates this csh by itself:
******************************
setenv ORACLE_SID web
rm 3_WEB.dmp.Z 3_WEB.log; mv 2_WEB.dmp.Z 3_WEB.dmp.Z; mv 2_WEB.log 3_WEB.log; mv 1_WEB.dmp.Z 2_WEB.dmp.Z; mv 1_WEB.log 2_WEB.log; rm -f WEB.pipe; /usr/sbin/mknod WEB.pipe p; compress < WEB.pipe > 1_WEB.dmp.Z &; exp sysexp/expsys file=WEB.pipe log=1_WEB.log buffer=5242880 rows=n indexes=n;

rm 3_WEB.dmp.Z 3_WEB.log; mv 2_WEB.dmp.Z 3_WEB.dmp.Z; mv 2_WEB.log 3_WEB.log; mv 1_WEB.dmp.Z 2_WEB.dmp.Z; mv 1_WEB.log 2_WEB.log; rm -f WEB.pipe; /usr/sbin/mknod WEB.pipe p; compress < WEB.pipe > 1_WEB.dmp.Z &; exp sysexp/expsys file=WEB.pipe log=1_WEB.log owner=WEB buffer=5242880;
*******************************
This is only an example of what we do. Every morning we see hung processes on the server; some databases have up to 20 schemas, and at the third or fifth one they hang and never finish.
Can you see the problem? Because we don't!

Thanks for your help Tom.



Similar script for Windows

david, February 27, 2006 - 6:33 pm UTC

Is there a similar script for Windows? Thanks.

Tom Kyte
February 27, 2006 - 6:41 pm UTC

No named pipes on Windows a la Unix. So, no. (Named pipes on Windows are not at all like named pipes on Unix.)

Import using multiple compressed *gz files

Prem, May 19, 2006 - 1:48 pm UTC

Hi Tom, this is not a review. I had a doubt and this is the place to ask it.

I have 19 compressed dmp files, named imp01.gz,
imp02.gz, etc. All of these have the data of one single schema.

Now I want to import without uncompressing the files, because
of a space issue. I tried the script which you have given. The import goes through the first dmp file (imp01.gz), but it doesn't continue to the second (imp02.gz). It fails
saying (last few lines from the import log):

. . importing table "BILLING_2002"
........................
Import file: expdat.dmp >
gunzip: stdout: Broken pipe
cat: write error: Broken pipe
cat: write error: Broken pipe
cat: write error: Broken pipe
cat: write error: Broken pipe

Import terminated successfully with warnings.
cat: write error: Broken pipe
cat: write error: Broken pipe
cat: write error: Broken pipe

I couldn't make out what this means. Does this indicate
the 2 GB limitation? Metalink gave me no clue on this.
Can you help me, please?

Tom Kyte
May 20, 2006 - 4:24 pm UTC

You didn't show us how you did it, though -- so how can we comment?

I showed you how I did mine:

cat `echo $FN.* | sort` | gunzip > $PIPE &


echo the filenames
piped to sort - to sort them in the correct order

so that cat gets the list of them correctly

and sends them to gunzip

which writes them to the pipe.

IF you
cat imp01.gz imp02.gz imp03.gz .... imp19.gz | gunzip > $PIPE &
then
you would reconstruct the file


if that isn't happening, then you are likely not doing that.

No Named pipes in data pump?

Getachew Geressu, June 27, 2006 - 7:36 pm UTC

Tom, your scripts and responses are very helpful.
How about using named pipes in 10GR2? I tried and tried
without success. Here is a perfectly working shell script that uses Named Pipes (except some of the parameters in the par file). But, in 10G, no luck so far:

Here is the main script (named export.job.sh) that calls the exp.par file (listed at the bottom)

###########################################################
#!/bin/sh

export DBDIR=$ORACLE_SID
export ORA_EXP_PIPE=`echo '/tmp/datapump'$DBDIR'_exp_pipe'`
export ORA_EXP_CMP=$DBDIR.dmp.Z
rm $ORA_EXP_CMP 1>/dev/null 2>&1
rm $ORA_EXP_PIPE 1>/dev/null 2>&1
mknod $ORA_EXP_PIPE p 1>/dev/null 2>&1
compress <$ORA_EXP_PIPE >$ORA_EXP_CMP &
$ORACLE_HOME/bin/expdp parfile=$ORACLE_EXPORT/datapump/exp.par <<EOF
system/`getpassword $ORACLE_SID`
wait
EOF

######################################################
# Here is the export parameter
######################################################
SCHEMAS=GDEMO
DUMPFILE=$ORA_EXP_CMP
JOB_NAME=expdp_job_GGTEST
LOGFILE=$ORACLE_SID.expdp.log
PARALLEL=4
STATUS=60
CONTENT=ALL
EXCLUDE=INDEX
DIRECTORY=DATA_PUMP_DIR

Tom Kyte
June 28, 2006 - 7:43 am UTC

you cannot use named pipes with data pump.

data pump between platforms

Elaine H, July 25, 2006 - 3:07 pm UTC

Sorry to keep bugging you! You are my hero though when it comes to all things database.

I still don't understand data pump. Can you data pump between 2 different versions of 10g? What about 2 different versions of 10g on 2 different O/S?

I know that if I could create an exp file that was large enough, I could reorg some data into a larger blocksize tablespace, but can I do the same using data pump? Am I just missing something in the documentation?

Thanks for any assistance or links you can provide.

Tom Kyte
July 25, 2006 - 3:13 pm UTC

the data pump stuff is backwards compatible - so you can go from version X to version Y as long as version X <= version Y.

It is cross platform as well.

why would you reorganize data into a larger blocksize tablespace? It would be a hassle to have to maintain and manage a multi-block size database. Multiple block size support was really designed to transport data from an OLTP system to a warehouse/reporting system, to make "ETL" easier.

But, you would just "alter table T move tablespace NEW_TABLESPACE", no need to export at all
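
For example, a sketch of the move (table, index and tablespace names are made up; note that moving a table marks its indexes UNUSABLE, so rebuild them afterwards):

sqlplus scott/tiger <<EOF
alter table t move tablespace new_tablespace;
-- the move invalidates T's indexes; rebuild them
alter index t_pk rebuild;
EOF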

multi-block size tablespace

A reader, July 25, 2006 - 7:07 pm UTC

Tom,

>>It would be a
hassle to have to maintain and manage a multi-block size database - it was
really designed to transport data from OLTP to warehouse/reporting system to
make "ETL" easier.

Can you please give some example of how multi-block size support benefits transporting data from OLTP to a warehouse, from an ETL perspective?


Tom Kyte
July 25, 2006 - 7:10 pm UTC

If you have a warehouse
And that warehouse was built with a 32k block

And you have a transactional system
That happens to be 8k

And you would like to transport that OLTP data to the warehouse.....

if we didn't have multiple block size support - you couldn't.
Since we do - you can.

That is how it "benefits" - it makes it possible.
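
A sketch of the warehouse side of that (parameter value and names are illustrative); before a tablespace with a non-standard block size can be created or plugged in, a buffer cache for that block size has to exist:

sqlplus "/ as sysdba" <<EOF
-- the 32k-block warehouse needs an 8k cache before the transported
-- 8k-block OLTP tablespace can be plugged in
alter system set db_8k_cache_size = 64m;

-- (the same idea in reverse: set db_32k_cache_size in an 8k database,
--  then "create tablespace ... blocksize 32k")
EOF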

A reader, July 25, 2006 - 7:16 pm UTC

Tom,

Thanks for your quick response.

When you say transporting data between two databases, it is in the context of the "transportable tablespace feature using exp/imp", correct?

Tom Kyte
July 25, 2006 - 7:22 pm UTC

correct
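
A condensed sketch of that flow with the original exp/imp; the tablespace, file names and credentials are made up, and you will want to check the privilege and endian-ness requirements for your exact versions:

# 1) make the tablespace read only on the source
sqlplus system/manager <<EOF
alter tablespace users read only;
EOF

# 2) export just the metadata for it
exp userid=system/manager transport_tablespace=y tablespaces=users file=tts.dmp

# 3) copy tts.dmp and the datafile(s) to the target, then plug in
imp userid=system/manager transport_tablespace=y file=tts.dmp datafiles=/u02/oradata/users01.dbf

# 4) make it read write again on the target (and on the source, if you keep it)
sqlplus system/manager <<EOF
alter tablespace users read write;
EOF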

Import while the export is in progress

Arvind Joshi, August 25, 2006 - 3:58 am UTC

Hi Tom,

Your article was very good. But I have a 400 GB database, and I am exporting to a pipe, splitting at 2000M, resulting in around 100+ files. I want to save time, and want to start the import in parallel after the creation of around 20 files, and go on ftp'ing and supplying export files to the imp utility on another server through a pipe.

The export database is 8.0.6 and the import is Oracle 9.2.0.

Please suggest.

Thanks,
Arvind

Tom Kyte
August 27, 2006 - 8:18 pm UTC

why are you exporting?

why not just backup and restore?

Import while the export is in progress

Arvind, August 31, 2006 - 12:35 am UTC

Hi Tom,
The reason for exporting the 8.0.6 database and importing into 9i is:
1. We have an Oracle 8.0.6 database with Oracle Applications 11.0.3 on Solaris 2.6 (32-bit).
2. We are moving to a 9.2.0 database with Oracle Applications 11.5.10.2 on Solaris 10 (64-bit).
3. We failed in cloning the Oracle 8.0.6 database from Solaris 2.6 (32-bit) to Solaris 10 (64-bit).
4. We succeeded in cloning the Oracle 8.0.6 database from Solaris 2.6 (32-bit) to Solaris 8 (64-bit). But this was not what we wanted; it was more like a test of the bitness part. We want 8.0.6 to run on Solaris 10, for the upgrade to 9i.
5. I would like to know what will make 8.0.6 work on Solaris 10, so that we can upgrade from there to Oracle 9i. Or any other way you can suggest.
6. Export / import is not a feasible option because:
a) It takes a long time.
b) I am not able to start imp and exp in parallel, though I am able to split the file at the OS level, as per your notes.
c) My attempt was to make export / import look feasible by starting the import in parallel with the export, thus reducing the wait time.

Pls Help!

Thanks,

Arvind

Tom Kyte
August 31, 2006 - 9:10 am UTC

Please utilize Support for APPS stuff like this; it is not as easy as just moving the database. There are lots of other things to be done in an apps migration like this.

Importing into 9i in parallel while the 8.0.6 export is in progress

arvind joshi, December 19, 2006 - 6:05 am UTC

Hi Tom,
I am back again with a question and I know you will have an answer!
I want to start the import into 9i while the export of 8.0.6 is in progress.
I don't want to wait for all the "split" files to be available for import.
The Command "cat `echo $FN.* | sort` | gunzip > $PIPE &" requires that the 8.0.6 export should be complete.
Please suggest how I achieve this in parallel : exporting from 8.0.6 and importing in 9i.

Thanks !
Arvind

Tom Kyte
December 19, 2006 - 8:38 am UTC

just

$ set environment for 806
$ exp ... file=$PIPE &
$ set environment for 9i
$ imp ... file=$PIPE


since you don't want to compress or split, DON'T COMPRESS and DON'T SPLIT.
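
Fleshed out slightly (still only a sketch: the ORACLE_HOME paths, SID, TNS alias and userids are made up; both utilities must run on the same machine since named pipes do not work over the network, though imp can connect to the remote 9i database via SQL*Net):

------------------------------
#!/bin/sh
PIPE=/tmp/exp_imp_pipe.dmp
rm -f $PIPE
mknod $PIPE p

# 8.0.6 environment: export the source database into the pipe, in the background
ORACLE_HOME=/u01/app/oracle/product/8.0.6; export ORACLE_HOME
ORACLE_SID=OLD806;                         export ORACLE_SID
$ORACLE_HOME/bin/exp userid=system/manager full=y file=$PIPE &

# 9.2 environment: import reads from the pipe while the export is still writing it
ORACLE_HOME=/u01/app/oracle/product/9.2.0; export ORACLE_HOME
$ORACLE_HOME/bin/imp userid=system/manager@nine_i full=y file=$PIPE log=imp.log

rm -f $PIPE
------------------------------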

Data Pump

Su Baba, March 07, 2007 - 7:34 pm UTC

- Is there a size limit of a Data pump output file? If I need to export some data beyond this limit, what are some of the workarounds?

- Does Data Pump handle LOBs?


Tom Kyte
March 08, 2007 - 10:04 am UTC

It is limited only by the operating system's ability to create large files.

"filesize"

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_export.htm#sthref116

can be used to cause data pump to create multiple files as they hit some threshold size (so the dump can span many files)
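
A sketch of that (the directory object, sizes and file name template are illustrative):

# %U is expanded to 01, 02, 03, ... as each piece reaches FILESIZE
expdp system/manager full=y directory=DATA_PUMP_DIR dumpfile=exp%U.dmp filesize=2GB logfile=expfull.log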

Yes, Data Pump handles LOBs.