arraysize
orapsdba, October 18, 2007 - 1:33 pm UTC
Default:
SQL> show arraysize
arraysize 15
Try with max:
SQL> set arraysize 5000
80 million rows to unload
Bhavesh ghodasara, October 20, 2007 - 7:52 am UTC
Hi Tom,
In my table there are 8 million records. What is the best way to unload it?
October 23, 2007 - 6:59 pm UTC
Depends - how do you want it unloaded?
I would use Data Pump. Does Data Pump meet your needs?
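For example, a table-mode Data Pump export could look something like the following (the schema, directory object, and file names are just placeholders):

expdp scott/tiger tables=BIG_TABLE directory=DATA_PUMP_DIR dumpfile=big_table.dmp logfile=big_table_exp.log

That produces a binary dump file readable by impdp. If you need a flat (delimited) file instead, spooling from SQL*Plus with a large arraysize, as discussed elsewhere on this page, is the usual approach.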
SET TERMOUT OFF
Leonid, November 01, 2007 - 2:08 pm UTC
Use SET TERMOUT OFF. It suppresses the display (the spool file is still written) and greatly increases the speed of the script. Note that it only takes effect when the commands are run from a script file, not typed interactively.
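A minimal unload script putting these settings together might look like this (table, column, and file names are just placeholders):

set termout off
set arraysize 500
set pagesize 0
set linesize 1000
set trimspool on
set feedback off
spool big_table.dat
select col1 || ',' || col2 from big_table;
spool off
set termout on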
Spooling to a file doesn't finish..
Zaloa, July 01, 2011 - 5:09 am UTC
Hi Tom,
I'm trying to spool the result of a SQL statement to a file but I'm having problems with it. The spooling starts and writes 2 million records to the file, but suddenly it stops and stays at the same point (in the middle of a record) for about 20 minutes, and then writes the last 68 records. Why is this happening? It always gets stuck on the same record.
If I kill the SQL*Plus process, the spool finishes writing the last 68 rows, along with an error indicating that the process has been killed.
I have tried spooling from other tables and this doesn't happen.
The problem is that sometimes the spooling never ends leaving the file incomplete. Could you help me with this?
Here is the SQL file I execute:
SET pages 0
SET linesize 7000
SET TRIMSPOOL ON
SET term OFF
SET verify OFF
SET feedback OFF
spool &1._EXTRACCION_H_SALTAPM_I.dat
SELECT PKH_CODIGOSPM || ';' || CODIGOR || ';' || CORECEPTOR || ';' || CODONANTE || ';' ||
       ESTADO || ';' || FVC || ';' || CSCANCELACIONPMIPD || ';' || DAEXTRAORDINARIA
  FROM H_SALTAPM
 WHERE CODIGOR IN (SELECT CODIGOR FROM PH_SALTAPM_IND)
/
spool OFF
SET term ON
prompt &1._EXTRACCION_H_SALTAPM_I.dat
EXIT;
Thank you very much in advance,
Zaloa
July 01, 2011 - 9:27 am UTC
the spooling is buffered - so you cannot tell what record it is 'stuck' on.
What is the query plan?
You should consider setting arraysize to a larger number - it defaults to 15 rows; for dumping to a file, 500 or 1000 might be more appropriate.
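One way to capture the plan from SQL*Plus is EXPLAIN PLAN plus DBMS_XPLAN (the table and column names below are taken from the script above; adjust as needed):

explain plan for
select pkh_codigospm, codigor, coreceptor, codonante, estado, fvc,
       cscancelacionpmipd, daextraordinaria
  from h_saltapm
 where codigor in (select codigor from ph_saltapm_ind);

select * from table(dbms_xplan.display);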
Spooling to a file doesn't finish.. (2)
Zaloa, July 04, 2011 - 4:35 am UTC
Hi again!
I have tried with arraysize 1000 and arraysize 500 and the spooling doesn't finish either...
I have also tried setting arraysize to 5000 and the result is the same. The only difference is the point where the spooling stops - a little earlier when it is set to 5000.
The query plan is the following:
PLAN_TABLE_OUTPUT
Plan hash value: 667255902
--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2016K| 155M| 69011 (7)| 00:13:49 |
| 1 | NESTED LOOPS SEMI | | 2016K| 155M| 69011 (7)| 00:13:49 |
| 2 | TABLE ACCESS FULL| H_SALTAPM | 25M| 1403M| 67285 (4)| 00:13:28 |
|* 3 | INDEX RANGE SCAN | IDX_CODIGOR_IND | 41493 | 972K| 1 (0)| 00:00:01 |
--------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("CODIGOR"="CODIGOR")
Any help will be appreciated, I don't know what to do!
Thank you!
July 05, 2011 - 8:58 am UTC
I didn't say that spooling with a larger arraysize would 'fix' this issue (we still need to diagnose it), but rather that performance will be better overall if you use a larger arraysize for something like this.
Can you enable tracing? Then you can "tail -f" the trace file and watch its progress. Also, can you query v$session_wait and v$session_event to see if the session is doing anything?
Also, make sure "pause" is not accidentally set on in SQL*Plus - make sure it isn't waiting for you to do something.
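For instance, from a second session (the SID and SERIAL# values below are placeholders for the spooling session's identifiers):

-- enable SQL trace for the spooling session
exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => true)

-- what is it waiting on right now?
select sid, event, state, seconds_in_wait
  from v$session_wait
 where sid = 123;

-- cumulative waits for that session
select event, total_waits, time_waited
  from v$session_event
 where sid = 123;

And in the SQL*Plus session doing the spooling, SHOW PAUSE will tell you whether pause is currently set on.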