If this is a one-off process to download the data to your machine, then I'd use SQL Developer or SQLcl.
This enables you to output the data in a given format (csv, json, xml, ...) by placing a comment in the SQL query. You can then spool the results 1000 rows at a time using Top-N queries:
variable st number
variable en number

exec :st := 1;
exec :en := 1000;
spool file_1.txt
select /*csv*/* from (
select t.*,
row_number() over (order by ...) rn
from t
)
where rn between :st and :en;
spool off
exec :st := 1001;
exec :en := 2000;
spool file_2.txt
select /*csv*/* from (
select t.*,
row_number() over (order by ...) rn
from t
)
where rn between :st and :en;
spool off
...
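On 12c and later you can get the same chunking without the row_number() subquery by using the offset/fetch syntax. A minimal sketch of the second chunk; the ordering column id is a placeholder for your own sort key:

```sql
spool file_2.txt
select /*csv*/ *
from t
order by id  -- placeholder ordering column
offset 1000 rows fetch next 1000 rows only;
spool off
```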
Obviously, you could write a script that calls the above, passing parameters for the file name and the start/end row numbers.
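For instance, a hedged sketch of such a script (chunk.sql is a hypothetical name) using SQL*Plus/SQLcl positional substitution variables, run as @chunk file_1.txt 1 1000:

```sql
-- chunk.sql (hypothetical): &1 = spool file name, &2 = first row, &3 = last row
spool &1
select /*csv*/* from (
  select t.*,
         row_number() over (order by ...) rn
  from t
)
where rn between &2 and &3;
spool off
```

Each chunk then becomes a one-line call, which is easy to drive from a shell loop.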
You can read more about formatting the output at:
http://www.thatjeffsmith.com/archive/2012/05/formatting-query-results-to-csv-in-oracle-sql-developer/
And Top-N queries at:
http://www.oracle.com/technetwork/issue-archive/2007/07-jan/o17asktom-093877.html
Or, on 12c, you can use the fetch first syntax:
http://www.oracle.com/technetwork/issue-archive/2013/13-sep/o53asktom-1999186.html
If this will be a repeated process and/or you want to write the files to the DB server, I'd create a PL/SQL process that:
- Bulk collects the data 1,000 rows at a time
- Uses utl_file to write it to a new file each time
For example:
declare
  cursor cur is
    select ...
    from ...
    order by ...;

  type tp is table of cur%rowtype index by binary_integer;
  arr tp;
begin
  open cur;
  loop
    fetch cur bulk collect into arr limit 1000;
    exit when arr.count = 0;
    ... process to write file ...
  end loop;
  close cur;
end;
/
You can find an example of using UTL_FILE at:
https://oracle-base.com/articles/9i/generating-csv-files#custom-plsql
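To make the "process to write file" placeholder concrete, here is a hedged sketch of the write step. DATA_DIR, my_table, and its columns (id, name) are all hypothetical; you would substitute your own query, and the directory object must already exist with write privileges granted:

```sql
declare
  -- my_table and its columns are placeholders for your own query
  cursor cur is
    select id, name
    from my_table
    order by id;

  type tp is table of cur%rowtype index by binary_integer;
  arr tp;

  fh      utl_file.file_type;
  file_no pls_integer := 0;
begin
  open cur;
  loop
    fetch cur bulk collect into arr limit 1000;
    exit when arr.count = 0;

    -- open a new file for each 1,000-row batch
    file_no := file_no + 1;
    fh := utl_file.fopen('DATA_DIR', 'file_' || file_no || '.csv', 'w');

    for i in 1 .. arr.count loop
      utl_file.put_line(fh, arr(i).id || ',' || arr(i).name);
    end loop;

    utl_file.fclose(fh);
  end loop;
  close cur;
end;
/
```

Note that this writes on the database server's file system, not your machine, which is why it only suits the repeated-process case above.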