I am looking for settings in SQL*Plus that will let me extract data from a large table into multiple files instead of a single file. I currently use UNIX's split command, but it adds time to my overall processing, and I am hoping spool could split the files at file-generation time. The file sizes can be determined by the number of records, for example 1 million records in each file.
# set command used
set headsep off termout off pagesize 0 arraysize 5000 trimspool on feedback off linesize 2000
spool spool_file
# SQL to extract from a single table
SELECT /*+ parallel(4) */
...
...
FROM table;
spool off
# UNIX split:
split -l 1000000 -a 10 spool_file
Spool writes to one file at a time, so to spool to separate files you need a separate query per file: query the first N% of the rows and spool them to file 1, the next N% to file 2, and so on.
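For example, on Oracle 12c or later each per-file query can page through the table with OFFSET/FETCH. A minimal sketch of the second 1,000,000-row chunk, assuming a hypothetical table my_table with a deterministic ordering column id (col1, col2 stand in for your column list; without a deterministic ORDER BY the chunks can overlap or miss rows):
spool spool_file_2
SELECT /*+ parallel(4) */ col1, col2
FROM my_table
ORDER BY id
OFFSET 1000000 ROWS FETCH NEXT 1000000 ROWS ONLY;
spool off
Note that each chunk query repeats the ordered scan, so whether this beats spool-then-split depends on the table, its indexes, and how many chunks you need.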
To run the chunks in parallel, you can have a master script that calls the per-file scripts, running them in the background, e.g.:
file_1.sh &
file_2.sh &
file_3.sh &
# etc.
wait   # don't let the master script exit until every background extract has finished
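Each file_N.sh is then just a thin wrapper that runs one chunk script through SQL*Plus. A sketch, where scott/tiger@mydb and chunk_2.sql are placeholders for your own connect string and the per-chunk SQL shown above:
#!/bin/sh
# file_2.sh -- extracts the second chunk of 1,000,000 rows
sqlplus -s scott/tiger@mydb @chunk_2.sql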
As all the queries will start at slightly different times, this might lead to read-consistency issues if the rows change while the extracts are running: each query sees the table as of the moment it started, not as of one common point in time.
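If a single consistent snapshot matters, one option is a flashback query: capture one SCN up front and have every chunk read the table as of that SCN. A sketch, assuming you have execute on DBMS_FLASHBACK (or can read current_scn from v$database) and that undo retention covers the whole extract:
-- in the master script, capture a single SCN first
SELECT dbms_flashback.get_system_change_number FROM dual;
-- in each chunk script, read the table as of that SCN (12345678 stands for the captured value)
SELECT /*+ parallel(4) */ col1, col2
FROM my_table AS OF SCN 12345678
ORDER BY id
OFFSET 1000000 ROWS FETCH NEXT 1000000 ROWS ONLY;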