Question and Answer

Tom Kyte

Thanks for the question.

Asked: October 18, 2007 - 7:44 am UTC

Last updated: July 05, 2011 - 8:58 am UTC

Version: 10.2.0

Viewed 10K+ times

You Asked

Hi,
I am trying to spool data from a SQL query to a CSV file.
My query returns nearly 50,000 rows, and it takes nearly 2 hours to transfer the data to the CSV file.
How can I reduce the time and speed up the export to the CSV file?
I can't use the UTL_FILE package. Can you suggest what can be done about this?

Thanks.

and Tom said...

this has nothing to do with the speed of spool.

I think it has everything to do with the speed of your QUERY.

You need to make the query go faster; writing to a spool file is probably very fast. Consider (just using my flat script from
http://asktom.oracle.com/tkyte/flat/index.html ):

[tkyte@dellpe sqlstuff]$ time flat / t > t.dat

real    0m2.086s
user    0m1.138s
sys     0m0.345s
[tkyte@dellpe sqlstuff]$ wc t.dat
  49736   68642 4917282 t.dat


About 50,000 records (49,736 rows; T is a copy of ALL_OBJECTS) in about 2 seconds.


You are looking in the wrong place to make this go faster. Do this:

SQL> set autotrace traceonly
SQL> set timing on
SQL> run your query


that'll still be slow, with no spooling and no writing - it is either the query or the network.
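The diagnostic above can be sketched as a small SQL*Plus script. ALL_OBJECTS here is only a stand-in for the reader's own query:

```sql
set autotrace traceonly   -- execute the query but discard the rows (no display, no spool)
set timing on             -- report elapsed time for each statement

-- stand-in query; substitute your own slow query here
select * from all_objects;

set autotrace off
```

If this runs slowly even though nothing is displayed or written to disk, the time is going into the query itself or the network, not into the spool.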

Comments

arraysize

orapsdba, October 18, 2007 - 1:33 pm UTC

Default:
SQL> show arraysize
arraysize 15

Try with max:
SQL> set arraysize 5000


80 million rows to unload

Bhavesh ghodasara, October 20, 2007 - 7:52 am UTC

Hi Tom,

In my table there are 8 million records. What is the best way to unload it?
Tom Kyte
October 23, 2007 - 6:59 pm UTC

depends. how do you want it unloaded.

I would use datapump. Does datapump meet your needs?

SET TERMOUT OFF

Leonid, November 01, 2007 - 2:08 pm UTC

Use SET TERMOUT OFF. It will suppress the display and greatly increase the speed of the script.
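Pulling the suggestions in this thread together (larger arraysize, TERMOUT OFF), a spool-to-CSV script might look like the following sketch. The file name and the ALL_OBJECTS query are illustrative only, not from the original thread:

```sql
set arraysize 500   -- fetch in larger batches; the default is only 15 rows
set termout off     -- suppress screen output while spooling from a script
set trimspool on    -- strip trailing blanks from each spooled line
set pagesize 0      -- no column headers or page breaks
set feedback off    -- no "n rows selected" footer
set linesize 4000

spool /tmp/unload.csv
select owner || ',' || object_name || ',' || object_type
  from all_objects;
spool off

set termout on
```

Note that SET TERMOUT OFF only suppresses display when the statements are run from a script file; it has no effect on commands typed interactively.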

Spooling to a file doesn't finish..

Zaloa, July 01, 2011 - 5:09 am UTC

Hi Tom,

I'm trying to spool the result of a SQL statement to a file, but I'm having problems with it. The spooling starts and writes 2 million records to the file, but then it suddenly stops and stays at the same point (in the middle of a record) for about 20 minutes before writing the last 68 records. Why is this happening? It always gets stuck at the same record.

If I kill the SQL*Plus process, the spool finishes writing the last 68 rows, along with an ERROR indicating that the SQL process has been killed.

I have tried spooling from other tables and this doesn't happen.

The problem is that sometimes the spooling never ends, leaving the file incomplete. Could you help me with this?

Here is the SQL file I execute:

SET pages 0
SET linesize 7000
SET TRIMSPOOL ON
SET term OFF
SET verify OFF
SET feedback OFF
spool &1._EXTRACCION_H_SALTAPM_I.dat
SELECT PKH_CODIGOSPM || ';' || CODIGOR || ';' || CORECEPTOR || ';' ||
       CODONANTE || ';' || ESTADO || ';' || FVC || ';' ||
       CSCANCELACIONPMIPD || ';' || DAEXTRAORDINARIA
  FROM H_SALTAPM
 WHERE CODIGOR IN (SELECT CODIGOR FROM PH_SALTAPM_IND)
/
spool OFF
SET term ON
prompt &1._EXTRACCION_H_SALTAPM_I.dat
EXIT;

Thank you very much in advance,
Zaloa

Tom Kyte
July 01, 2011 - 9:27 am UTC

the spooling is buffered - so you cannot tell which record it is 'stuck' on.

what is the query plan?

you should consider setting arraysize to a larger number - it defaults to 15 rows; for dumping to a file, 500 or 1000 might be more appropriate.

Spooling to a file doesn't finish.. (2)

Zaloa, July 04, 2011 - 4:35 am UTC

Hi again!

I have tried with arraysize 1000 and arraysize 500, and the spooling doesn't finish either.
I have also tried setting arraysize to 5000, and the result is the same. The only difference is the point where the spooling stalls - a little bit earlier with 5000.

The query plan is the following:

PLAN_TABLE_OUTPUT
Plan hash value: 667255902

--------------------------------------------------------------------------------------
| Id  | Operation          | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                 | 2016K |  155M | 69011  (7) | 00:13:49 |
|   1 |  NESTED LOOPS SEMI |                 | 2016K |  155M | 69011  (7) | 00:13:49 |
|   2 |   TABLE ACCESS FULL| H_SALTAPM       |   25M | 1403M | 67285  (4) | 00:13:28 |
|*  3 |   INDEX RANGE SCAN | IDX_CODIGOR_IND | 41493 |  972K |     1  (0) | 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("CODIGOR"="CODIGOR")

Any help will be appreciated; I don't know what to do!

Thank you!
Tom Kyte
July 05, 2011 - 8:58 am UTC

I didn't say that spooling with a larger arraysize would 'fix' this issue (as we diagnose it), but rather that overall performance will be better if you use a larger arraysize for something like this.

Can you enable tracing? Then you can "tail -f" the trace file and watch its progress. Also, can you query v$session_wait and v$session_event to see if the session is doing anything?

Also, make sure "pause" is not accidentally on in SQL*Plus; make sure it isn't waiting for you to do something.
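The tracing and wait-monitoring steps could be done along these lines. The bind variable :spool_sid stands for the SID of the spooling session and is an illustrative placeholder, not part of the original answer:

```sql
-- In the spooling session, before running the spool script:
alter session set sql_trace = true;

-- From a second session, check what the spooling session is waiting on right now:
select event, state, seconds_in_wait
  from v$session_wait
 where sid = :spool_sid;

-- And its cumulative wait history:
select event, total_waits, time_waited
  from v$session_event
 where sid = :spool_sid
 order by time_waited desc;
```

With tracing on, the trace file appears under the instance's user_dump_dest directory, where "tail -f" will show whether the session is still producing rows.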
